Advanced D3D Programming (D3D编程进阶): presentation transcript

1 Advanced D3D Programming

2 Review of the Last Lecture
Introduction to DirectX
Creating the D3D object and the D3D device object
Drawing vertices
Transforming vertices

3 Terminology
Texture mapping: the process of mapping an image onto a geometric object; the image is called a texture. (demo)
Buffer: a region of video memory that holds rasterized image data.
Frame buffer: the buffer that stores color values.
Depth buffer: the buffer that stores the depth at each point of the image plane.
Swap buffers: two frame buffers on the graphics card that are exchanged with each other to display animation. (demo)

4 Topics of This Lecture
Vertex lighting with D3D
Texture mapping with D3D
Loading a 3D model with D3D
Walking through a scene with D3D
Introduction to DXFramework

5 DirectX Architecture
Win32 application
DirectX SDK
Hardware emulation layer (HEL): software-based emulation
Hardware abstraction layer (HAL)
Underlying hardware

6 DirectX Components
A low-level API for games, graphics, and multimedia
Talks directly to the hardware; Windows only
Amateurish before DX8.0; increasingly complete from DX8.1 onward
DirectX Graphics: MS DirectDraw, MS Direct3D
MS DirectSound: high-level audio
MS DirectMusic: music, soundtracks, dynamic media content
MS DirectShow: high-level multimedia stream capture and playback
MS DirectInput: user input (including force feedback)
MS DirectPlay: multiplayer networked games
DirectSetup: API for installing DirectX components
DirectX Media Objects: streaming-data objects, including video and audio encoders/decoders

7 A Simple Direct3D Lighting Example (Demo4)
Create a Windows window
Initialize Direct3D
Initialize the geometry data
Run the message loop
Display the object: turn the light on
Shut down Direct3D

8 Defining the Vertex Format
struct CUSTOMVERTEX
{
    D3DXVECTOR3 position; // vertex position
    D3DXVECTOR3 normal;   // vertex normal
};
// D3DXVECTOR3/4: D3DX vector classes
#define D3DFVF_CUSTOMVERTEX (D3DFVF_XYZ|D3DFVF_NORMAL)

9 Setting Render States (InitD3D())
d3dpp.AutoDepthStencilFormat = D3DFMT_D16; // use a 16-bit depth buffer
// disable back-face culling
g_pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );
// enable z-buffer hidden-surface removal
g_pd3dDevice->SetRenderState( D3DRS_ZENABLE, TRUE );

10 Filling the Vertex Data
// The program generates the vertices of a cylinder.
CUSTOMVERTEX* pVertices;
if( FAILED( g_pVB->Lock( 0, 0, (void**)&pVertices, 0 ) ) )
    return E_FAIL;
for( DWORD i=0; i<50; i++ )
{
    FLOAT theta = (2*D3DX_PI*i)/(50-1);
    pVertices[2*i+0].position = D3DXVECTOR3( sinf(theta),-1.0f, cosf(theta) );
    pVertices[2*i+0].normal   = D3DXVECTOR3( sinf(theta), 0.0f, cosf(theta) );
    pVertices[2*i+1].position = D3DXVECTOR3( sinf(theta), 1.0f, cosf(theta) );
    pVertices[2*i+1].normal   = D3DXVECTOR3( sinf(theta), 0.0f, cosf(theta) );
}
g_pVB->Unlock();
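The Lock/Unlock loop can also be sketched without the D3D API. The following standalone C++ mirrors the slide's cylinder generation; `Vec3` and `MakeCylinder` are illustrative stand-ins for D3DXVECTOR3 and the fill loop, not D3DX names. Note that the side normals are independent of y, exactly as in the loop above.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Stand-in for D3DXVECTOR3 (only x, y, z are needed here).
struct Vec3 { float x, y, z; };

struct Vertex { Vec3 position; Vec3 normal; };

// Generate the 2*n vertices of a triangle-strip cylinder of radius 1
// spanning y in [-1, 1], as the Lock/Unlock loop on the slide does.
std::vector<Vertex> MakeCylinder(unsigned n) {
    const float PI = 3.14159265358979f;
    std::vector<Vertex> v(2 * n);
    for (unsigned i = 0; i < n; ++i) {
        float theta = (2 * PI * i) / (n - 1);
        Vec3 nrm = { std::sin(theta), 0.0f, std::cos(theta) };
        v[2 * i + 0].position = { nrm.x, -1.0f, nrm.z }; // bottom rim
        v[2 * i + 0].normal   = nrm;
        v[2 * i + 1].position = { nrm.x,  1.0f, nrm.z }; // top rim
        v[2 * i + 1].normal   = nrm;
    }
    return v;
}
```

Because the last ring uses theta = 2*PI, the strip closes on itself; the normals are unit length by construction.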

11 Rendering the Scene
// Clear the back buffer and the z-buffer (the depth buffer as well as the color buffer)
g_pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0,0,255), 1.0f, 0 );
if( SUCCEEDED( g_pd3dDevice->BeginScene() ) )
{
    // set up the light and material
    SetupLights();
    // set up the transformation matrices
    SetupMatrices();
    // draw the vertices
    g_pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof(CUSTOMVERTEX) );
    g_pd3dDevice->SetFVF( D3DFVF_CUSTOMVERTEX );
    g_pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2*50-2 );
    g_pd3dDevice->EndScene();
}
// equivalent to a swap-buffers call
g_pd3dDevice->Present( NULL, NULL, NULL, NULL );

12 Setting the Object Material (SetupLights())
D3DMATERIAL9 mtrl; // material structure
ZeroMemory( &mtrl, sizeof(D3DMATERIAL9) );
mtrl.Diffuse.r = mtrl.Ambient.r = 1.0f;
mtrl.Diffuse.g = mtrl.Ambient.g = 1.0f;
mtrl.Diffuse.b = mtrl.Ambient.b = 0.0f;
mtrl.Diffuse.a = mtrl.Ambient.a = 1.0f;
g_pd3dDevice->SetMaterial( &mtrl );

13 Setting the Light
D3DXVECTOR3 vecDir;
D3DLIGHT9 light;
ZeroMemory( &light, sizeof(D3DLIGHT9) );
light.Type = D3DLIGHT_DIRECTIONAL; // a directional light
light.Diffuse.r = 1.0f; // the light's color
light.Diffuse.g = 1.0f;
light.Diffuse.b = 1.0f;
vecDir = D3DXVECTOR3( cosf(timeGetTime()/350.0f), 1.0f, sinf(timeGetTime()/350.0f) );
D3DXVec3Normalize( (D3DXVECTOR3*)&light.Direction, &vecDir ); // the light's direction
light.Range = 1000.0f; // the light's range
g_pd3dDevice->SetLight( 0, &light );

14 Lighting-Related Render States
g_pd3dDevice->LightEnable( 0, TRUE ); // enable light 0
g_pd3dDevice->SetRenderState( D3DRS_LIGHTING, TRUE ); // enable lighting
g_pd3dDevice->SetRenderState( D3DRS_AMBIENT, 0x00202020 ); // set the ambient light

15 A Simple Texture Mapping Example (Demo5)
Create a Windows window
Initialize Direct3D
Initialize the geometry data (including the texture)
Run the message loop
Display the object: turn on the light, enable texture mapping
Shut down Direct3D

16 Defining the Vertex Format
struct CUSTOMVERTEX
{
    D3DXVECTOR3 position; // position
    D3DCOLOR    color;    // color
    // no normal, because no lighting is done!
    FLOAT       tu, tv;   // texture coordinates
};
#define D3DFVF_CUSTOMVERTEX (D3DFVF_XYZ|D3DFVF_DIFFUSE|D3DFVF_TEX1)

17 Setting Render States (InitD3D())
// disable back-face culling
g_pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );
// disable lighting
g_pd3dDevice->SetRenderState( D3DRS_LIGHTING, FALSE );
// enable depth testing
g_pd3dDevice->SetRenderState( D3DRS_ZENABLE, TRUE );

18 Setting the Vertex Texture Coordinates
for( DWORD i=0; i<50; i++ )
{
    FLOAT theta = (2*D3DX_PI*i)/(50-1);
    pVertices[2*i+0].position = D3DXVECTOR3( sinf(theta),-1.0f, cosf(theta) );
    pVertices[2*i+0].color    = 0xffffffff;
    pVertices[2*i+0].tu       = ((FLOAT)i)/(50-1); // texture coordinate
    pVertices[2*i+0].tv       = 1.0f;              // texture coordinate
    pVertices[2*i+1].position = D3DXVECTOR3( sinf(theta), 1.0f, cosf(theta) );
    pVertices[2*i+1].color    = 0xff808080;
    pVertices[2*i+1].tu       = ((FLOAT)i)/(50-1); // texture coordinate
    pVertices[2*i+1].tv       = 0.0f;              // texture coordinate
}

19 Rendering the Scene (the Render function)
Clear the screen:
g_pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0,0,255), 1.0f, 0 );
Set the matrices: SetMatrices
Set the texture parameters
Set the texture matrix
Draw the scene

20 Rendering the Scene (continued)
// set the texture
g_pd3dDevice->SetTexture( 0, g_pTexture );
// set the blending mode between texture and color
g_pd3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_MODULATE );
// the first argument of the color operation is the texture
g_pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
// the second argument is the diffuse color
g_pd3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );
// disable alpha blending
g_pd3dDevice->SetTextureStageState( 0, D3DTSS_ALPHAOP, D3DTOP_DISABLE );

21 Rendering the Scene (continued)
D3DXMATRIXA16 mat;
mat._11 = 0.25f; mat._12 = 0.00f; mat._13 = 0.00f; mat._14 = 0.00f;
mat._21 = 0.00f; mat._22 =-0.25f; mat._23 = 0.00f; mat._24 = 0.00f;
mat._31 = 0.00f; mat._32 = 0.00f; mat._33 = 1.00f; mat._34 = 0.00f;
mat._41 = 0.50f; mat._42 = 0.50f; mat._43 = 0.00f; mat._44 = 1.00f;
g_pd3dDevice->SetTransform( D3DTS_TEXTURE0, &mat );
g_pd3dDevice->SetTextureStageState( 0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_COUNT2 );
g_pd3dDevice->SetTextureStageState( 0, D3DTSS_TEXCOORDINDEX, D3DTSS_TCI_CAMERASPACEPOSITION );
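D3D applies this matrix to texture coordinates as row vectors, so with D3DTTFF_COUNT2 the camera-space position (x, y) lands at (u, v) = (0.25x + 0.5, -0.25y + 0.5): the view is scaled into the unit texture square with v flipped. A minimal sketch of the effective mapping (`TexXform` is an illustrative name; the pass-through z row is ignored since its u and v entries are zero):

```cpp
#include <cassert>

// Effective 2D part of the slide's texture transform: only _11, _22 (scale)
// and _41, _42 (translation) are non-trivial for the (u, v) outputs.
struct TexXform {
    float m11, m22, m41, m42;
    void apply(float x, float y, float& u, float& v) const {
        // Row-vector convention: [x y z 1] * M, keeping the first two results.
        u = m11 * x + m41;
        v = m22 * y + m42;
    }
};
```

With the slide's values, the camera-space origin maps to the texture centre (0.5, 0.5).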

22 Loading the Texture
// D3DX can load a texture directly from an image file
if( FAILED( D3DXCreateTextureFromFile( g_pd3dDevice, "banana.bmp", &g_pTexture ) ) )
{
    // keep looking!
    if( FAILED( D3DXCreateTextureFromFile( g_pd3dDevice, "..\\banana.bmp", &g_pTexture ) ) )
    {
        MessageBox(NULL, "Could not find banana.bmp", "Textures.exe", MB_OK);
        return E_FAIL;
    }
}
Very easy!

23 A Simple X-File Model Example (Demo6)
Create a Windows window
Initialize Direct3D
Initialize the geometry data (load the mesh data)
Run the message loop
Display the object: draw the mesh
Shut down Direct3D
Mesh: a collection of triangles

24 Mesh Structures
LPD3DXMESH g_pMesh = NULL; // the D3DX mesh, kept in system memory
D3DMATERIAL9* g_pMeshMaterials = NULL; // the mesh's materials
LPDIRECT3DTEXTURE9* g_pMeshTextures = NULL; // the mesh's textures
DWORD g_dwNumMaterials = 0L; // the number of materials in the mesh

25 Loading the Mesh (InitGeometry())
LPD3DXBUFFER pD3DXMtrlBuffer; // a buffer interface holding the per-subset material data
if( FAILED( D3DXLoadMeshFromX( "Tiger.x", D3DXMESH_SYSTEMMEM, g_pd3dDevice, NULL, &pD3DXMtrlBuffer, NULL, &g_dwNumMaterials, &g_pMesh ) ) )
{
    // keep looking
    if( FAILED( D3DXLoadMeshFromX( "..\\Tiger.x", D3DXMESH_SYSTEMMEM, g_pd3dDevice, NULL, &pD3DXMtrlBuffer, NULL, &g_dwNumMaterials, &g_pMesh ) ) )
    {
        MessageBox(NULL, "Could not find tiger.x", "Meshes.exe", MB_OK);
        return E_FAIL;
    }
}

26 Setting Materials and Textures (InitGeometry())
// get a pointer to the material data
D3DXMATERIAL* d3dxMaterials = (D3DXMATERIAL*)pD3DXMtrlBuffer->GetBufferPointer();
g_pMeshMaterials = new D3DMATERIAL9[g_dwNumMaterials];
g_pMeshTextures = new LPDIRECT3DTEXTURE9[g_dwNumMaterials]; // one texture per material
for( DWORD i=0; i<g_dwNumMaterials; i++ )
{
    // fetch the texture and material data from pD3DXMtrlBuffer
}

27 Drawing the Mesh (Render())
A mesh is generally divided into several subsets:
for( DWORD i=0; i<g_dwNumMaterials; i++ )
{
    // set the material
    g_pd3dDevice->SetMaterial( &g_pMeshMaterials[i] );
    // set the texture
    g_pd3dDevice->SetTexture( 0, g_pMeshTextures[i] );
    // draw this subset of the mesh
    g_pMesh->DrawSubset( i );
}

28 A Wizard-Generated D3D Program (Demo7)
Generate a project with the D3D wizard
Choose the dialog mode
This produces an example that draws a teapot

29 Libraries Automatically Included in the Project
dsound.lib: sound
dinput8.lib: input
dxerr9.lib: helper library
d3dx9.lib: D3DX helper library
d3d9.lib: the D3D object
d3dxof.lib: X-file handling

30 CAppForm
Derived from CFormView and CD3DApplication
The main interaction events and the display are all handled inside it
Initialize D3D objects: InitDeviceObjects(), which generates the teapot mesh data
Set parameters: RestoreDeviceObjects()
Draw the scene: Render()
Update the scene: FrameMove()
Run the program

31 Vertex Transformation Examples
The Billboard example
BumpEarth
CubeMap
Cull

32 Billboard Billboarding is the name given to a technique in which a texture map is treated as a three-dimensional entity and placed in the scene. It is a simple technique that uses a two-dimensional image in a three-dimensional scene by rotating the plane of the image so that it is normal to the viewing direction (the line from the view point to the image's position).

33 Billboard

34 Billboard The modeling rotation for the billboard is the rotation that takes Bn into the direction opposite to Los, where:
Bn is the normal vector of the billboard, say (0,0,1)
Los is the viewing direction vector from the view point to the required position of the billboard in world coordinates

35 Billboard The billboard is in effect a two-dimensional object that is rotated about its y axis (like a tree, for example) through an angle that makes it normal to the view direction, and translated to the appropriate position in the scene. The background texels in the billboard are set to transparent.
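A minimal sketch of the y-axis billboard rotation described above, assuming Bn = (0, 0, 1); `BillboardYAngle` and `RotateNormal` are illustrative helpers, not D3DX calls:

```cpp
#include <cassert>
#include <cmath>

// The billboard plane is rotated about y so that its normal Bn = (0,0,1)
// points back toward the viewer, i.e. along -Los projected onto the xz
// plane. The required y rotation is the atan2 of that projection.
float BillboardYAngle(float losX, float losZ) {
    return std::atan2(-losX, -losZ); // -Los points from billboard to viewer
}

// Rotating (0,0,1) about the y axis by angle a yields (sin a, 0, cos a).
void RotateNormal(float a, float& nx, float& nz) {
    nx = std::sin(a);
    nz = std::cos(a);
}
```

For a viewer at the origin looking down +z at the billboard (Los = (0, 0, 1)), the rotated normal comes out as (0, 0, -1), facing the viewer.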

36 Bump Mapping Bump mapping is an elegant device that enables a surface to appear as if it were wrinkled or dimpled without the need to model these depressions geometrically. The surface normal is angularly perturbed according to information given in a two-dimensional bump map; this tricks a local reflection model, in which intensity is mainly a function of the surface normal, into producing (apparent) local geometric variation on a smooth surface.

37 Bump Mapping Consider a point P(u, v) on a (parameterized) surface, with bump-map value B(u, v). We define the surface normal at the point as N = Pu × Pv, where Pu and Pv are the partial derivatives lying in the tangent plane to the surface at P. The displaced surface and its normal are:
P'(u, v) = P(u, v) + B(u, v) N
N' = P'u × P'v
P'u = Pu + Bu N + B(u, v) Nu
P'v = Pv + Bv N + B(u, v) Nv

38 Bump Mapping Surface normal

39 Bump Mapping If B is small enough, we can ignore the final term in each equation, and we have
N' = N + Bu (N × Pv) + Bv (Pu × N)
or, writing A = N × Pv and B = N × Pu,
N' = N + Bu (N × Pv) - Bv (N × Pu) = N + (Bu A - Bv B) = N + D
D is then a vector lying in the tangent plane that pulls N into the desired orientation; it is calculated from the partial derivatives of the bump map and the two vectors in the tangent plane.
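The final expression N' = N + D can be computed directly; the following sketch uses illustrative vector helpers:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
V3 cross(V3 a, V3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
V3 add(V3 a, V3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
V3 scale(float s, V3 a) { return { s*a.x, s*a.y, s*a.z }; }
float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Perturbed normal N' = N + D with D = Bu (N x Pv) - Bv (N x Pu),
// driven by the bump-map partial derivatives Bu and Bv.
V3 PerturbNormal(V3 N, V3 Pu, V3 Pv, float Bu, float Bv) {
    V3 D = add(scale(Bu, cross(N, Pv)), scale(-Bv, cross(N, Pu)));
    return add(N, D);
}
```

Since both cross products are perpendicular to N, the offset D always lies in the tangent plane, and a flat bump map (Bu = Bv = 0) leaves N unchanged.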

40 Bump Mapping A multi-pass technique for bump mapping splits the lighting calculation into two components:
N'·L = N·L + D·L

41 Environment Mapping Environment maps are a shortcut to rendering shiny objects that reflect the environment in which they are placed. They can approximate the quality of global illumination for specular reflections, and do so by reducing the problem of following a reflected view vector to indexing into a two-dimensional map, which is no different from conventional texture mapping.

42 Environment Mapping The disadvantages of environment mapping are:
It is (geometrically) correct only when the object is small with respect to the environment that contains it. The error is usually not noticeable, in the sense that we are not disturbed by a "wrong" reflection.
An object can only reflect the environment, not itself, so the technique is wrong for concave objects.
A separate map is required for each object in the scene that is to be environment-mapped.
In one common form of environment mapping, a new map is required when the view point changes.

43 Environment Map For a single pixel we should consider a reflection beam, rather than a single vector; the area subtended by the beam in the map is then filtered to give the pixel value. A reflection beam originates either from the four pixel corners, if we are indexing the map for each pixel, or from polygon vertices, if we are using a fast (approximate) map.

44 Environment Map

45 Environment Map Two typical methods of environment mapping, classified according to the way in which the three-dimensional environment information is mapped into two dimensions:
Cubic mapping
Spherical mapping

46 Cubic Mapping Cubic mapping: the 3D environment surrounding the object is "simplified" to a cube. The cubic environment map is in practice six maps that form the surfaces of the cube. For example, the view point is fixed at the center of the object that is to receive the environment map, and six views are rendered.

47 Cubic Mapping Consider a view point fixed at the center of a room. If we consider the room to be empty, these views would contain the four walls, the floor, and the ceiling.
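Indexing the six faces from a reflection vector can be sketched as follows; the face is chosen by the largest-magnitude component, and the face numbering convention below is an assumption, not part of the slide:

```cpp
#include <cassert>
#include <cmath>

// Select which of the six cube faces a direction vector indexes. The two
// remaining components, divided by the dominant one, give the in-face
// coordinates s, t in [-1, 1]. Face ids: 0..5 = +x, -x, +y, -y, +z, -z.
int CubeFace(float x, float y, float z, float& s, float& t) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) { s = y / ax; t = z / ax; return x > 0 ? 0 : 1; }
    if (ay >= ax && ay >= az) { s = x / ay; t = z / ay; return y > 0 ? 2 : 3; }
    s = x / az; t = y / az;   return z > 0 ? 4 : 5;
}
```

Real APIs differ in per-face orientation (which way s and t run on each face), but the dominant-axis selection is the same idea.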

48 Spherical Mapping Spherical map: the surrounding environment is "simplified" to a sphere. The sphere environment map consists of a latitude-longitude projection, and the reflected view vector Rv is mapped into (u, v) coordinates via its longitude and latitude. The main problem with this simple technique is the singularities at the poles.
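One common latitude-longitude convention for mapping a unit Rv to (u, v) can be sketched as follows (the exact convention varies between systems, so treat this as one possible choice):

```cpp
#include <cassert>
#include <cmath>

const float PI = 3.14159265358979f;

// Latitude-longitude mapping of a unit reflected view vector into [0,1]^2:
// u comes from the longitude around the y axis, v from the latitude.
// At the poles (Rv = (0, +-1, 0)) u is undefined: the singularity the
// slide mentions.
void SphereMapUV(float rx, float ry, float rz, float& u, float& v) {
    u = (std::atan2(rz, rx) + PI) / (2 * PI); // longitude in [0, 1]
    v = std::acos(ry) / PI;                   // latitude  in [0, 1]
}
```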

49 Spherical Map An alternative sphere mapping form consists of a circular map which is the orthographic projection of the reflection of the environment as seen in the surface of a perfect mirror sphere.

50 Geometric Shadows Shadows, like texture mapping, are commonly handled by an empirical add-on algorithm; they are pasted into the scene like texture maps. Shadows are important: a scene without shadows looks artificial. They give clues about the scene, consolidate spatial relationships between objects, and give information on the position of the light source.

51 Shadow & Illumination To compute shadows we need knowledge both of their shape and of the light intensity inside them. An area of the scene in shadow is not completely bereft of light: it is simply not subject to direct illumination, but still receives indirect illumination. Strictly, this means we would need global illumination to calculate shadow intensity correctly.

52 Shadow Types Shadows are a function of the lighting environment.
They can be hard-edged or soft-edged and contain both an umbra and a penumbra. The relative sizes of the umbra and penumbra are a function of the size and shape of the light source and its distance from the occluding object.

53 Shadow Types The umbra is completely cut off from the light source, whereas the penumbra is an area that receives some light from the source. A penumbra surrounds an umbra, and there is always a gradual change in intensity from penumbra to umbra.

54 Shadow Calculation A shadow from polygon A that falls on polygon B due to a point light source can be calculated by projecting polygon A onto the plane that contains polygon B, using the position of the light source as the center of projection. No shadows are seen if the viewpoint is coincident with the light source. With a point light source there is no penumbra to calculate and the shadow has a hard edge.

55 Shadow Calculation For static scenes, shadows are fixed and do not change as the view point changes. If the relative positions of objects and light sources change, the shadows have to be recalculated. Most shadow generation algorithms produce hard-edged point-light-source shadows, and most deal with polygon mesh models.

56 Simple Shadow on Ground Plane
This method suffices for single-object scenes casting shadows on a flat ground plane: it simply draws the projection of the object onto the ground plane. It is thus restricted to single-object scenes, or to multi-object scenes in which objects are sufficiently isolated not to cast shadows on each other.

57 Simple Shadow on Ground Plane

58 Shadow Generation: Projecting Polygons/Scan Line
Adding shadows to a scan-line algorithm requires a pre-processing stage that builds a secondary data structure linking every polygon to all polygons that may shadow it. The algorithm processes the secondary data structure simultaneously with the normal scan-conversion process to determine whether any shadows fall on the polygon that generated the visible scan-line segment under consideration.

59 Shadow Generation: Projecting Polygons/Scan Line
If no shadow polygons exist, the scan-line algorithm proceeds as normal. For the current polygon: if a shadow polygon exists, then, using the light source as the center of projection, the shadow is generated by projecting the shadowing polygon onto the plane that contains the current polygon. Normal scan conversion then proceeds simultaneously with a process that determines whether the current pixel is in shadow.

60 Shadow Generation: Projecting Polygons/Scan Line
There are three possibilities:
The shadow polygon does not cover the generated scan-line segment, and the situation is identical to an algorithm without shadows.
Shadow polygons completely cover the visible scan-line segment; scan conversion proceeds, but the pixel intensity is modulated by an amount that depends on the number of shadows covering the segment.
A shadow polygon partially covers the visible scan-line segment; in this case the segment is subdivided and the process is applied recursively until a solution is obtained.

61 Shadow Generation: Projecting Polygons/scan line

62 Shadow Generation: Projecting Polygons/Scan Line
A representation of these possibilities is shown in the figure. In order along the scan line:
a) Polygon A is visible, therefore it is rendered
b) Polygon B is visible and is rendered
c) Polygon B is shadowed by polygon A and is rendered at an appropriately reduced intensity
d) Polygon B is visible and is rendered

63 Shadow Generation: Shadow Volume
A shadow volume is the invisible volume of space swept out by the shadow of an object. It is the infinite volume defined by lines emanating from a point light source through the vertices of the object.

64 Shadow Generation: Shadow Volume
Polygons defined by the light source and the contour edges of the object define the bounding surface of the shadow volume. Thus each object, considered in conjunction with a point light source, generates a shadow-volume object made up of a set of shadow polygons.

80 Shadow Generation: Shadow Volume
For each pixel a counter is maintained, initialized to 1 if the view point is already in shadow and to 0 otherwise. As we descend the depth-sorted list of polygons, the counter is incremented when a front-facing shadow polygon is passed and decremented when a back-facing one is passed. The value of this counter tells us, when we encounter a real polygon, whether we are inside a shadow volume.
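The counting rule can be sketched as a standalone function; the `Crossing` list stands in for the front/back facings of the shadow polygons crossed, in depth order, before the real surface is reached:

```cpp
#include <cassert>
#include <vector>

// Shadow-volume counting along a ray: start at 1 if the view point is
// itself in shadow, otherwise 0; +1 per front-facing shadow polygon
// crossed, -1 per back-facing one. A positive count at the real surface
// means the surface point lies inside a shadow volume.
enum Crossing { FRONT_FACING, BACK_FACING };

bool InShadow(bool viewPointInShadow, const std::vector<Crossing>& crossings) {
    int counter = viewPointInShadow ? 1 : 0;
    for (Crossing c : crossings)
        counter += (c == FRONT_FACING) ? 1 : -1;
    return counter > 0;
}
```

Entering and then leaving a volume (front then back) cancels out, which is exactly why a balanced count means the point is lit.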

81 Shadow Generation: Shadow Volume

99 Shadow Generation: Shadow Z-Buffer
No shadows are seen if the viewpoint is coincident with the light source. An equivalent statement is that shadows are the areas hidden from the light source, implying that modified hidden-surface removal algorithms can be used to solve the shadow problem.

100 Shadow Generation: Shadow Z-Buffer
This requires a separate shadow Z-buffer for each light source, and in its basic form it is only suitable for a scene illuminated by a single light source. Alternatively, a single shadow Z-buffer could be reused, with the algorithm executed once per light source, but this would be somewhat inefficient and slow.

101 Shadow Generation: Shadow Z-Buffer
The algorithm is a two-step process. First, the scene is rendered with the light source as the view point and the depth information is stored in the shadow Z-buffer; no intensities are calculated. This computes a depth image, from the light source, of the polygons that are visible to it. The second step is to render the scene with the normal Z-buffer algorithm.

102 Shadow Generation: Shadow Z-Buffer
This process is enhanced as follows: if a point is visible, a coordinate transformation maps (x, y, z), the coordinates of the point in three-dimensional screen space (from the view point), to (x', y', z'), its coordinates in screen space with the light point as the coordinate origin. (x', y') is used to index the shadow Z-buffer and the stored depth is compared with z'. If z' is greater than the stored value, some surface is nearer to the light source than the point under consideration, so the point is in shadow and a shadow "intensity" is used; otherwise the point is rendered as normal.
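The depth comparison of the second step can be sketched as follows; the bias term is a practical addition, not from the slide, to keep a surface from shadowing itself through rounding error:

```cpp
#include <cassert>
#include <vector>

// Second-step test of the shadow Z-buffer algorithm: pass 1 stored, for
// each light-space texel, the depth of the surface nearest the light;
// pass 2 maps a visible point into light space as (xl, yl, zl) and
// compares zl against the stored depth.
struct ShadowZBuffer {
    int w, h;
    std::vector<float> depth; // nearest-to-light depth per texel

    bool pointInShadow(int xl, int yl, float zl, float bias = 1e-3f) const {
        // Something closer to the light occludes this point.
        return zl > depth[yl * w + xl] + bias;
    }
};
```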

103 Shadow in Games: Adding Shadows to the Light Map
Shadows can be pre-computed for static light sources and added to the light map. The normal basic computer-graphics shadow model is used: we calculate the geometry of the shadow, but not the reflected light intensity in the shadow area. This is easily done by ray tracing.

104 Shadow in Games For each point (x, y, z) corresponding to a light-map pixel, we cast a ray to the light source (considered as a point). Any reported obstruction between the point and the light source means that the point is in shadow, and the light intensity in the corresponding light-map pixel is reduced.

105 Shadow in Games Light-map shadows are generated by invoking a ray-intersection test from every light-map pixel (that is, from the scene point corresponding to the light-map pixel) towards the light source, and reducing the stored light if the ray intersects an object.

106 Shadow in Games The simplest and fastest approach to generating shadows is to use a texture map blended with the existing image in the frame buffer. This implies that the shadow has constant shape and size; the texture map can be something like a blurred circle.

107 Shadow in Game The algorithm is as follows:
Fire a single ray from the light source through the origin of the shadow-casting object. The intersection of this ray with the first surface identifies the receiving surface and gives a hit point, or reference point, for the shadow. The shadow map then becomes a square (say) polygon and is blended into the frame buffer using the appropriate blending function. A depth comparison is used because there may be an object closer to the viewpoint that obscures part of the shadow.
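The reference-point step is a ray-plane intersection. A sketch, assuming the receiver is a plane n·p + d = 0; Vec3 and rayPlaneHit are hypothetical helper names, not course code:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Intersect a ray (orig + t*dir, t >= 0) with the plane n . p + d = 0.
// Returns true and writes the hit point if the ray reaches the plane.
bool rayPlaneHit(Vec3 orig, Vec3 dir, Vec3 n, float d, Vec3 &hit) {
    float denom = n.x * dir.x + n.y * dir.y + n.z * dir.z;
    if (denom == 0.0f) return false;     // ray parallel to the plane
    float t = -(n.x * orig.x + n.y * orig.y + n.z * orig.z + d) / denom;
    if (t < 0) return false;             // plane is behind the ray
    hit = { orig.x + t * dir.x, orig.y + t * dir.y, orig.z + t * dir.z };
    return true;
}
```

The shadow polygon is then centered at `hit` and blended into the frame buffer.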

108 Homework Review the DirectX tutorials Download and get familiar with DXFramework Read the relevant sections of the new textbook Homework due next week! (vector algebra)

109 Contents 3D modeling The rendering pipeline Rendering Lighting computation Modeling and rendering in games

110 Expected learning outcomes A conceptual understanding of 3D representation and modeling How to build scenes with a 3D engine (such as OpenGL or DirectX) A basic understanding of rendering
How to render a 3D scene, including light-source setup and lighting computation

111 Basic terminology Image: pixel, the basic unit of an image, which cannot be subdivided Texture: texel (texture element) Voxel

112 Classification of 3D model representations Polygonal representation Constructive Solid Geometry (CSG) Spatial-subdivision representation Bicubic parametric surfaces Implicit surfaces
Polygonal representation: a mesh of polygons. Constructive Solid Geometry (CSG): primitives assembled with Boolean operations and geometric transformations. Spatial-subdivision representation: the whole scene space is subdivided into many small cubic elements, each labeled empty or full. Bicubic parametric surfaces: curved quadrilateral patches (cubic curves or surfaces). Implicit surfaces: represented by implicit functions.

113 Polygonal representation The fundamental representation in 3D graphics Polygonal models are straightforward to create They combine efficiently with current rendering algorithms Curved objects are generally approximated by polygonal models

114 Data structures for 3D polygonal objects Two common representations: the edge-based (boundary) model (common in CAD) and the vertex-based model (common in computer graphics and games)

115 Vertex-based representation [diagram: polygons 1 and 2 index shared vertices 1-4; each vertex A-E stores its position (x, y, z) and a list of the polygons sharing that vertex]

116 Edge-based (boundary) representation

117 CSG representation A high-level representation that records not only the shape but also how it was constructed

118 CSG tree A CSG model is represented as a tree. Leaf nodes contain simple primitives; interior nodes store Boolean operators or linear transformations.
Primitives include spheres, cones, cylinders, etc. Boolean operations: union, intersection, complement, difference. CSG is both a representation and a user-interface technique: the user picks primitive solids and combines them with Boolean operators.
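One standard way to evaluate such a tree is point-membership classification: ask each node whether a query point is inside it. A sketch with sphere and box leaves; CsgNode and its fields are invented for illustration, not from the course:

```cpp
#include <cassert>
#include <memory>

// CSG tree node: leaves are primitives, interior nodes combine their
// two children with a Boolean operator.
struct CsgNode {
    enum Kind { SPHERE, BOX, UNION, INTERSECT, DIFFERENCE } kind;
    float cx = 0, cy = 0, cz = 0, r = 0;                       // sphere leaf
    float minx = 0, miny = 0, minz = 0,
          maxx = 0, maxy = 0, maxz = 0;                        // box leaf
    std::unique_ptr<CsgNode> left, right;                      // operator node

    bool contains(float x, float y, float z) const {
        switch (kind) {
        case SPHERE: {
            float dx = x - cx, dy = y - cy, dz = z - cz;
            return dx*dx + dy*dy + dz*dz <= r * r;
        }
        case BOX:
            return x >= minx && x <= maxx && y >= miny && y <= maxy
                && z >= minz && z <= maxz;
        case UNION:      return left->contains(x, y, z) || right->contains(x, y, z);
        case INTERSECT:  return left->contains(x, y, z) && right->contains(x, y, z);
        case DIFFERENCE: return left->contains(x, y, z) && !right->contains(x, y, z);
        }
        return false;
    }
};
```

For example, "sphere minus box" is a DIFFERENCE node whose left child is the sphere and whose right child is the box.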

119 CSG tree example

120 Spatial subdivision for 3D representation Spatial subdivision treats the entire object space as composed of a set of voxels Voxel-based representation: a voxel is a primitive element represented by a small cube

121 Examples of spatial-subdivision representations Octree: every parent node has 8 children Quadtree: every parent node has 4 children BSP (binary space partitioning): every parent node splits into two children

122 Decomposing a 3D scene The scene is first enclosed in a cubic region. The cube is then subdivided iteratively, and each resulting child node is assigned an attribute:
black (fully occupied), white (empty), or gray (partially occupied). The resulting hierarchical spatial data structure describes how the objects in the scene are distributed in space.

123 Spatial subdivision in modeling and rendering An octree/quadtree can fully describe the scene objects: the cells occupied by an object constitute its representation.
Alternatively, the object keeps its standard representation (e.g. a polygonal model) and the octree merely records how objects are distributed in the scene. BSP trees

124 Spatial subdivision example (octree)

125 A simple octree data structure example
struct octreeroot {
    float Xmin, Ymin, Zmin;   /* minimum spatial extent */
    float Xmax, Ymax, Zmax;   /* maximum spatial extent */
    struct octree *root;      /* root node of the octree */
};
struct octree {
    int label;                /* black, white, or gray */
    struct octree *oct[8];    /* recurse into the children if gray */
};

126 BSP trees Each non-terminal node of a BSP tree represents a partitioning plane that splits space in two Plane equation: ax + by + cz + d ≥ 0 Implicit properties of the partitioning plane:
Objects on one side of the plane cannot intersect any object on the other side. For any viewpoint in the scene, objects on the same side as the viewpoint are closer than objects on the other side.
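The basic operation behind both properties is classifying a point against the plane ax + by + cz + d = 0. A sketch (type and function names are illustrative):

```cpp
#include <cassert>

struct Plane { float a, b, c, d; };

enum Side { FRONT, BACK, ON };

// Sign of the plane equation at (x, y, z); an epsilon band counts
// nearly coplanar points as ON to absorb floating-point error.
Side classify(const Plane &p, float x, float y, float z, float eps = 1e-6f) {
    float s = p.a * x + p.b * y + p.c * z + p.d;
    if (s > eps)  return FRONT;
    if (s < -eps) return BACK;
    return ON;
}
```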

127 BSP trees and polygonal objects Constructing a BSP tree

128 BSP tree processing order

129 Visibility determination for polygonal objects Traverse the BSP tree according to the viewpoint position At each node, test whether the viewpoint lies on the far side or the near side of the node's plane Traverse the far-side subtree and output its polygons,
then traverse the near-side subtree and output its polygons. This yields a far-to-near polygon ordering for the current viewpoint, so the scene can be drawn with the painter's algorithm.
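The far-then-near traversal can be sketched as follows, assuming each node stores its partitioning plane and the polygons lying on that plane; all type names here are invented for illustration:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct SplitPlane { float a = 0, b = 0, c = 1, d = 0; };

struct BspNode {
    SplitPlane plane;                   // partitioning plane of this node
    std::vector<std::string> polys;     // polygons lying on the plane
    BspNode *front = nullptr, *back = nullptr;
};

// Back-to-front order for the painter's algorithm: at each node, visit
// the subtree on the far side of the plane from the eye first, then the
// node's own polygons, then the near-side subtree.
void backToFront(const BspNode *n, float ex, float ey, float ez,
                 std::vector<std::string> &out) {
    if (!n) return;
    float s = n->plane.a * ex + n->plane.b * ey + n->plane.c * ez + n->plane.d;
    const BspNode *nearSide = (s >= 0) ? n->front : n->back;
    const BspNode *farSide  = (s >= 0) ? n->back  : n->front;
    backToFront(farSide, ex, ey, ez, out);
    out.insert(out.end(), n->polys.begin(), n->polys.end());
    backToFront(nearSide, ex, ey, ez, out);
}
```

With the eye on the front side of the root plane, the output lists the back subtree's polygons first, then the root's, then the front subtree's.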

130 Visibility determination example

131 Bicubic parametric surfaces A bicubic parametric patch is defined by four corner points and four boundary edges, each of which is a cubic curve.
Bézier, B-spline, and non-uniform rational B-spline (NURBS) forms. The interior of the patch is a cubic surface whose points are given in functional form. Advantages: an exact analytic representation; convenient for editing 3D shapes; compact and economical to store.

132 Implicit representation A representation built from implicitly defined objects, based on implicit functions. The implicit function expresses the influence of each primitive over a local neighborhood, and the surface is constructed from these influences.

133 How to build a polygon modeling system A vertex-based system Creating and deleting objects Querying vertex information Vertex operations (insertion, deletion) Operations on the underlying data structures Basic modeling operations

134 Definition of a polygon A polygon is a closed shape defined by a sequence of points connected by straight line segments. In most cases a polygon is planar, i.e. all of its points must be coplanar. [figure: a filled polygon
and an unfilled polygon]

135 Representations of a polygon Method 1: an ordered set of points. Method 2: a set of pointers to vertices. Method 3: a set of edges pointing to vertices.

136 The vertex object A vertex object contains the following information: a Point3D object; a pointer to a list of polygons (Polygon3D**); the number of polygons in the list;
the maximum number of polygons the list can hold; and a pointer to the next vertex, which may be null.

137 顶点对象实例 Class Vertex{ public: Vertex(int num=10);
Vertex(double xVal, double yVal, double zVal); Vertex(Vertex &v); ~Vertex(); Polygon3D * GetNthPolygon(int index); void SetNthPolygon(int index, Polygon3D * poly); void AddPolygon(Polygon3D *poly); void DeletePolygon(Polygon3D *poly); int GetNumPolygons(void) { return numItems; }; Vertex *next; private: Point3D pt; Polygon3D **list; int maxNum,numItems; };

138 Vertex object (continued)
Vertex::Vertex(int num) {   /* the default argument appears only in the declaration */
    pt.x = pt.y = pt.z = 0;
    maxNum = num;
    numItems = 0;
    list = new Polygon3D *[maxNum];
    if (!list) return;
    for (int i = 0; i < maxNum; i++) list[i] = NULL;
    next = NULL;
}
Vertex::~Vertex() {
    if (list) delete[] list;
    list = NULL;
    maxNum = numItems = 0;
}

139 Vertex object (continued)
void Vertex::AddPolygon(Polygon3D *poly) {
    if (!list) return;
    for (int i = 0; i < numItems; i++)      /* already in the list? */
        if (list[i] == poly) return;
    if (numItems >= maxNum) {               /* grow the list */
        Polygon3D **temp;
        maxNum *= 2;
        temp = new Polygon3D *[maxNum];
        if (!temp) return;
        for (int j = 0; j < numItems; j++) temp[j] = list[j];
        for (int k = numItems; k < maxNum; k++) temp[k] = NULL;
        delete[] list;
        list = temp;
    }
    numItems++;
    list[numItems - 1] = poly;
}

140 Vertex object (continued)
void Vertex::DeletePolygon(Polygon3D *poly) {
    if (!list) return;
    for (int i = 0; i < numItems; i++)
        if (list[i] == poly) {
            for (int j = i + 1; j < numItems; j++) list[j - 1] = list[j];
            numItems--;
            return;
        }
}
Polygon3D *Vertex::GetNthPolygon(int index) {
    if (index >= numItems) return NULL;
    if (!list) return NULL;
    return list[index];
}

141 Vertex object (continued)
void Vertex::SetNthPolygon(int index, Polygon3D *poly) {
    if (index >= numItems) return;
    if (!list) return;
    list[index] = poly;
}
Vertex::Vertex(double xVal, double yVal, double zVal) {
    pt.x = xVal; pt.y = yVal; pt.z = zVal;
    maxNum = 10;
    numItems = 0;
    list = new Polygon3D *[maxNum];
    for (int i = 0; i < maxNum; i++) list[i] = NULL;
    next = NULL;
}

142 The vertex list object This object contains: the number of vertices in the list; the maximum possible number of vertices; a pointer to the vertex list (Vertex **);
and a hash lookup function for locating vertices quickly.

143 Vertex list object (continued)
VertexList::VertexList(int size) {
    tblSz = size;
    maxNum = tblSz;   /* one chain head per hash slot */
    list = new Vertex *[maxNum];
    if (!list) return;
    for (int i = 0; i < maxNum; i++) list[i] = NULL;
}

144 Vertex list object (continued)
VertexList::~VertexList() {
    if (list) {
        for (int i = 0; i < maxNum; i++)   /* walk every hash chain */
            if (list[i]) {
                Vertex *item = list[i];
                while (item) {
                    Vertex *next = item->next;
                    delete item;
                    item = next;
                }
            }
        delete[] list;
        list = NULL;
    }
    tblSz = 0;
    maxNum = 0;
}

145 Adding a vertex
Vertex *VertexList::AddVertex(double x, double y, double z) {
    int index = hash(x, y, z);
    Vertex *item, *prev;
    if (!list) return NULL;
    item = list[index];
    prev = NULL;
    while (item) {
        if ((item->pt.x == x) && (item->pt.y == y) && (item->pt.z == z))
            return item;                 /* already present: share it */
        prev = item;
        item = item->next;
    }
    Vertex *newItem = new Vertex(x, y, z);
    if (prev == NULL) list[index] = newItem;
    else prev->next = newItem;
    return newItem;
}

146 Deleting a vertex
void VertexList::DeleteVertex(double x, double y, double z) {
    int index = hash(x, y, z);
    Vertex *item, *prev;
    if (!list) return;
    item = list[index];
    prev = NULL;
    while (item) {
        if ((item->pt.x == x) && (item->pt.y == y) && (item->pt.z == z)) break;
        prev = item;
        item = item->next;
    }
    if (!item) return;
    if (!prev) list[index] = item->next;
    else prev->next = item->next;
    delete item;
}
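The code above calls hash(x, y, z) without defining it. One plausible sketch, assuming IEEE doubles; the prime multipliers are an arbitrary choice for illustration, not from the course:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hash a vertex position into [0, tblSz): reinterpret each coordinate's
// bit pattern as an integer, mix with large primes, reduce mod table size.
int hashVertex(double x, double y, double z, int tblSz) {
    std::uint64_t hx, hy, hz;
    std::memcpy(&hx, &x, sizeof hx);
    std::memcpy(&hy, &y, sizeof hy);
    std::memcpy(&hz, &z, sizeof hz);
    std::uint64_t h = hx * 73856093ULL ^ hy * 19349663ULL ^ hz * 83492791ULL;
    return static_cast<int>(h % static_cast<std::uint64_t>(tblSz));
}
```

Hashing the bit patterns (rather than rounding the values) keeps the function consistent with the exact `==` comparisons used in AddVertex and DeleteVertex.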

147 Bounding volumes The principle of bounding volumes: testing whether two bounding volumes intersect is far faster than testing whether the objects themselves intersect. Four kinds of bounding volume: spheres; AABBs (axis-aligned bounding boxes);
OBBs (oriented bounding boxes, rotated to fit the object tightly); and discrete orientation polytopes (bounding volumes formed by several planes).

148 Bounding volumes

149 Bounding volume hierarchies

150 Collision detection between spheres Compute the distance between the two sphere centers If it is greater than the sum of their radii, there is no intersection Otherwise perform an exact test This method is very inefficient for long, thin objects.
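In practice this test is written with squared distances, so no square root is needed. A sketch (type names are illustrative):

```cpp
#include <cassert>

struct Sphere { float x, y, z, r; };

// Spheres overlap iff the squared center distance is at most the
// squared sum of the radii (touching counts as overlap).
bool spheresOverlap(const Sphere &a, const Sphere &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float rsum = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= rsum * rsum;
}
```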

151 Collision detection between AABBs The test reduces to one-dimensional operations: a pair of AABBs intersect if and only if their projections overlap on every coordinate axis.
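A sketch of the per-axis overlap test (the Aabb type is illustrative):

```cpp
#include <cassert>

struct Aabb { float min[3], max[3]; };

// AABBs intersect iff their intervals overlap on all three axes;
// a separating axis exists as soon as one pair of intervals is disjoint.
bool aabbOverlap(const Aabb &a, const Aabb &b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}
```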

152 Collision detection between OBBs The whole object can be represented as a hierarchical OBB tree: the root is the OBB of the entire object, and leaf nodes contain one or more polygons.

153 Exact collision detection: convex polyhedra Consider polyhedra P and Q; if any of the following three tests succeeds, a collision is reported.
First, check whether any vertex of Q lies inside P; do the same for the vertices of P. Then perform piercing tests of every edge of Q against the faces of P, and likewise for the edges of P. Finally, to handle the case where the two polyhedra coincide exactly, check whether the center point of each face of Q lies inside P, and do the same for P.
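The first step relies on a point-in-convex-polyhedron test: with outward-facing face planes, a point is inside exactly when it lies on the non-positive side of every plane. A sketch (names are illustrative):

```cpp
#include <cassert>
#include <vector>

// Face plane of a convex polyhedron with an outward normal:
// a*x + b*y + c*z + d <= 0 means "on the inner side of this face".
struct FacePlane { float a, b, c, d; };

bool pointInsideConvex(const std::vector<FacePlane> &faces,
                       float x, float y, float z) {
    for (const FacePlane &f : faces)
        if (f.a * x + f.b * y + f.c * z + f.d > 0) return false;
    return true;
}
```

Running this over every vertex of Q against P's faces (and vice versa) implements step one; for non-convex polyhedra a different inside test is required.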

154 Real-time strategies Exploit spatial/temporal coherence Preprocessing: also called offline processing; spatial subdivision and bounding volumes are examples of preprocessing schemes.

