Image-Pair-Based Anisotropic Material Modeling


1 Image-Pair-Based Anisotropic Material Modeling
Jie Feng, Wangyu Xiao, and Bingfeng Zhou, Peking University

2 Anisotropic material Motivation Silk, satin, hair, brushed steel, …
Varying surface appearance under different lighting conditions More difficult to model and render The surface appearance of an object is important in the physical world. Anisotropic materials (such as silk, hair, and brushed steel) are a special category of surface materials because their appearance varies with the lighting and viewing directions. Hence, they are more difficult to model and render.

3 Problems of existing methods
Motivation Problems of existing methods Large dataset Dozens or hundreds of input images Complex equipment Specialized optical devices / lighting systems Time consuming Data acquisition / computation [Dong et al. 2010] [Li et al. 2005] [Gu et al. 2006] [Wang et al. 2008] Some existing methods in the literature can produce high-quality renderings of anisotropic materials, but many problems remain. First, these methods usually require a large input dataset; some need up to hundreds of input images. Some methods even require specialized equipment, such as special cameras or lighting systems, which is too expensive and complex for ordinary users. They also consume more time in data acquisition and computation, and are hard to apply in practice.

4 Fast anisotropic material modeling Simple input No expensive equipment
Our Method Fast anisotropic material modeling Inexpensive, limited resources Simple input Texture image + highlighted image No expensive equipment Ordinary camera / LED light Reflectance & geometry properties Diffuse color / orientation field / BRDF parameters / height field Rendered output New viewing and lighting conditions Texture Image Highlighted Image In this paper, we propose a fast anisotropic material modeling method that is inexpensive, uses limited resources, and is easy to implement. Our input data is very simple: only one texture image and one highlighted image. The images are taken with an ordinary camera and an LED light; no expensive equipment is required. From the image pair, we recover the necessary reflectance and geometry properties of the material sample and synthesize rendered images under new viewing and lighting conditions. Rendered Image Ground truth

5 Overview BRDF parameters Illumination Texture Image Orientation Field
Reflectance Texture Image Highlighted Image Input Image Pair Orientation Field Rendered Image BRDF parameters Illumination Reflectance Here is an overview of our workflow. First, we acquire the input image pair. Then each image is decomposed into intrinsic images. From the intrinsic images, we compute step by step: an orientation field (which indicates the anisotropic orientations of the material); a group of BRDF parameters (which describe the reflectance properties of the material); and a height field (which recovers the texture details). Finally, using this information, new images of the material under arbitrary lighting and viewing directions can be rendered. (Blue arrows indicate the modeling processes; red arrows indicate the rendering process.) Height Field

6 Image Acquiring Material sample Camera Texture image Highlighted image
Nearly planar Camera Ordinary digital camera Fixed position Automatically calibrated Texture image Diffuse lighting Highlighted image Single LED point light source (with known position) Material sample Calibration target LED light source Texture Image Highlighted Image Acquiring the input image pair is very simple. Here is the setup. Currently we focus on nearly planar material samples, such as a piece of silk. The images are taken with an ordinary digital camera, which is fixed on a tripod and automatically calibrated with these targets. The texture image is simply captured under diffuse lighting, and the highlighted image is taken under a single LED point light source.

7 Intrinsic Image Decomposition

8 Intrinsic Image Decomposition
Decompose the input images into intrinsic images Illumination image (lighting information at each pixel) Reflectance image (diffuse color of the material) To avoid the effect of irrelevant components of the input images Illumination Reflectance Illumination Reflectance Before being used in modeling, the input images are decomposed into intrinsic images: an illumination image and a reflectance image. We do this to avoid the effect of irrelevant components of the images. In fact, two of these components (the illumination of the texture image and the reflectance of the highlighted image) are not used in further computation; only the other two images are used. The reflectance component of the texture image contains the diffuse color of the material, and the illumination component of the highlighted image is used to recover the BRDF parameters. Texture Image Highlighted Image

9 Intrinsic Image Decomposition
Utilizing a Retinex-based method [Shen et al. 2008] The reflectance is an intrinsic property of the material Not affected by non-uniform illumination/shading Recovering image reflectance Pixel color Reflectance Shading Normalizing Intensity-normalized reflectance (chromaticity) Here we adopt a Retinex-based method to perform the decomposition. The main idea is that the reflectance is an intrinsic property of the material and is not affected by non-uniform illumination (shading). Therefore, chromaticity differences are caused only by reflectance changes, and the image decomposition problem can be turned into the problem of recovering the image reflectance. The color of pixel (i,j) is the product of its reflectance and its shading. If we normalize the intensity of the image, we get an intensity-normalized reflectance image R^, which is in fact the chromaticity map of the image. We then solve an optimization problem to compute the reflectance intensity r_ij. ===================================== Retinex (a contraction of "retina" and "cortex") is an image enhancement theory based on the human visual system and grounded in scientific experiment and analysis. Its basic model was first proposed by Edwin Land in 1971 as a theory of color, and an image enhancement method was built on it using color constancy. The core of Retinex theory is that the color of an object is determined by its ability to reflect long-wave (red), medium-wave (green), and short-wave (blue) light, not by the absolute intensity of the reflected light; an object's color is unaffected by non-uniform illumination and remains consistent. In other words, Retinex theory rests on color constancy. Unlike traditional image enhancement algorithms such as linear and nonlinear transforms or image sharpening, which can only enhance one class of image features (compressing the dynamic range, enhancing edges, and so on), Retinex balances dynamic-range compression, edge enhancement, and color constancy, so it can adaptively enhance many different types of images. These good properties have led to wide use of Retinex algorithms. For each point (x,y) of the observed image S: S(x,y) = R(x,y) · L(x,y) (1). According to Retinex theory, the color of an object is determined by its reflectance, which is an inherent property of the object and does not depend on the absolute intensity of the light source. Therefore, by computing the relative brightness relations between pixels, every pixel of the image can be corrected and its color determined. ======================================== [Shen et al. 2008] normalizes each pixel's color in RGB space by the image's luminance, forms a vector from the normalized colors of each pixel's neighborhood, and, using a Markov random field, assumes that two pixels with identical neighborhood vectors have the same reflectance. Under this assumption all pixels in the image are grouped, and the pixels within each group share the same luminance component in the reflectance image, which greatly reduces the size of the problem. Similarly to [Shen et al. 2008], we use a color-based Retinex method: large chromaticity changes between neighboring pixels are due to changes in reflectance, while other changes are caused by changes in illumination. The dimensions of the feature vector include the RGB values, the luminance, and the neighborhood mean and variance (but not the x- and y-direction gradients). Our goal is to decompose an image I into the product of a reflectance image R and an illumination image S. Normalizing I gives Î and normalizing R gives R̂, where normalizing means dividing each pixel's color by its luminance; we set R̂ = Î. At this point the reflectance image still lacks its luminance component, so the problem becomes solving for the luminance component r of the reflectance image such that R = R̂ · r. Solving reflectance intensity r_i,j
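A minimal sketch of the decomposition relations described above, written in LaTeX; the notation (I for the input image, S for shading, R̂ for the chromaticity) is ours, not copied verbatim from the paper:

    % Intrinsic decomposition: pixel color = reflectance * shading
    I_{i,j} = R_{i,j} \cdot S_{i,j}
    % Dividing each pixel by its luminance gives the chromaticity map,
    % which equals the intensity-normalized reflectance:
    \hat{I}_{i,j} = I_{i,j} / \lVert I_{i,j} \rVert , \qquad \hat{R}_{i,j} = \hat{I}_{i,j}
    % The remaining unknown is the per-pixel reflectance intensity r_{i,j}:
    R_{i,j} = r_{i,j} \, \hat{R}_{i,j}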

10 Intrinsic Image Decomposition
Solving reflectance intensity r_i,j by optimizing: Penalize large illumination derivatives Penalize large reflectance derivatives Similar feature vectors → similar reflectance In this optimization problem, the first two terms penalize large illumination derivatives; the third and fourth penalize large reflectance derivatives; and the fifth term ensures that pixels with similar feature vectors have similar reflectance. Here, α and β are coefficients, and z_ij is a function of the feature vector ρ. The feature vector is defined in the neighborhood of each pixel and contains the pixel color, the average luminance, and the standard deviation of the luminance. ρ: feature vector of each pixel Pixel color Average luminance Standard deviation of the luminance
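The energy itself appears only as an image in the slide; a plausible reconstruction from the term-by-term description above (the exact discretization and weighting are assumptions, loosely following the Retinex energy of [Shen et al. 2008]):

    E(r) = \sum_{i,j} \big[ (\Delta_x s_{i,j})^2 + (\Delta_y s_{i,j})^2 \big]
         + \alpha \sum_{i,j} \big[ (\Delta_x r_{i,j})^2 + (\Delta_y r_{i,j})^2 \big]
         + \beta \sum_{i,j} \big( r_{i,j} - z_{i,j}(\rho) \big)^2

where s_{i,j} is the illumination intensity implied by r_{i,j}, Δx and Δy are horizontal and vertical finite differences, and z_{i,j}(ρ) couples pixels with similar feature vectors.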

11 Intrinsic Image Decomposition
Input image [Dong et al. 2008] (before interactive refinement) Reflectance Illumination Output of our method Reflectance Illumination Illumination Reflectance Texture image After solving for the reflectance component, the illumination component can also be calculated. Here are two examples of the results. In the third example, we give a comparison with a prior work [Dong et al. 2008]. Notice the obvious shading in their reflectance image (b) and the white lines in their illumination image (c); in our result these artifacts are removed.

12 Calculating Orientation Field

13 Calculating Orientation Field
Anisotropic orientation Fine-line textures shown on anisotropic materials Determined by the microstructures Directly related to the anisotropic appearance Extracting the orientation field Prior works: multiple input images, varying light sources Our method: uses only one input texture image Initial Orientation Field Removing Noises Optimized Orientation Field Interpolated Orientation Field Texture Image

14 Calculating Orientation Field
Texture Image Initializing Image gradient (Sobel operators) Dominant orientation angle of an s×t patch Optimization Patch coherence for measuring the fitting error Discard patches with poor coherence Recalculate according to the N valid neighbors Interpolation Smooth and continuous orientation field Optimized Removing Noises Initialized Interpolated
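The slide gives the orientation formulas only as images; below is a minimal Python sketch of one standard way to obtain a patch's dominant orientation and a coherence score from Sobel gradients (a structure-tensor estimate; the function name, patch size, and coherence threshold are illustrative assumptions, not necessarily the paper's exact formulation):

    import numpy as np
    from scipy import ndimage

    def orientation_field(img, patch=16, coh_thresh=0.2):
        # img: 2-D grayscale texture image (float array).
        # Returns per-patch dominant angle theta, coherence, and a validity mask.
        gx = ndimage.sobel(img, axis=1)   # horizontal Sobel gradient
        gy = ndimage.sobel(img, axis=0)   # vertical Sobel gradient
        rows, cols = img.shape[0] // patch, img.shape[1] // patch
        theta = np.zeros((rows, cols))
        coh = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                r = slice(i * patch, (i + 1) * patch)
                c = slice(j * patch, (j + 1) * patch)
                gxx = np.sum(gx[r, c] ** 2)
                gyy = np.sum(gy[r, c] ** 2)
                gxy = np.sum(gx[r, c] * gy[r, c])
                # Dominant texture orientation: perpendicular to the mean
                # gradient direction of the patch.
                theta[i, j] = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2
                # Coherence in [0, 1]: 1 when all gradients agree, 0 when isotropic.
                denom = gxx + gyy
                coh[i, j] = np.hypot(gxx - gyy, 2.0 * gxy) / denom if denom > 1e-12 else 0.0
        valid = coh > coh_thresh   # low-coherence patches would be discarded
        return theta, coh, valid

Patches flagged invalid would then be recomputed from their valid neighbors and the field interpolated to a smooth, continuous result, as the bullets above describe.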

15 Calculating Orientation Field
Results

16 Estimating BRDF Parameters

17 Estimating BRDF Parameters
Ashikhmin-Shirley BRDF model Anisotropic parametric model Microfacet distribution Specular and diffuse components Four parameters: nu and nv control the shape of the specular lobe; Rd and Rs are the diffuse and specular colors [Ashikhmin and Shirley 2000]
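The model equations appear only as images in the slide; for reference, the specular and diffuse terms of the Ashikhmin-Shirley model [Ashikhmin and Shirley 2000] are (k1, k2: light and view directions; h: half vector; n: normal; u, v: tangent frame):

    \rho_s(k_1,k_2) = \frac{\sqrt{(n_u+1)(n_v+1)}}{8\pi}
        \, \frac{(n \cdot h)^{\frac{n_u (h \cdot u)^2 + n_v (h \cdot v)^2}{1-(h \cdot n)^2}}}
                {(h \cdot k)\,\max\!\big((n \cdot k_1),\,(n \cdot k_2)\big)} \, F(k \cdot h)

    \rho_d(k_1,k_2) = \frac{28\,R_d}{23\pi}\,(1-R_s)
        \Big(1-\big(1-\tfrac{n \cdot k_1}{2}\big)^5\Big)
        \Big(1-\big(1-\tfrac{n \cdot k_2}{2}\big)^5\Big)

with the Schlick Fresnel approximation F(k·h) = R_s + (1 - R_s)(1 - k·h)^5.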

18 Estimating BRDF Parameters
Obtain the BRDF parameters by optimizing: I(Rs, Rd, nu, nv): the rendered image I0: the illumination component of the highlighted image ssim(I, I0): the structural similarity of the two images [Wang et al. 2004] Results Similar appearance in illumination, but lacking some texture details Sample 1 Blue Silk Sample 2 Red Silk
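The objective appears only as an image; from the definitions above it is presumably of the form (our reconstruction, not the slide's exact formula):

    (R_s^*, R_d^*, n_u^*, n_v^*) = \arg\max_{R_s, R_d, n_u, n_v}
        \; \mathrm{ssim}\big( I(R_s, R_d, n_u, n_v),\; I_0 \big)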

19 Computing Height Field

20 Computing Height Field
Estimate a height field to recover texture details Feature vector ρ for each pixel The luminance and the gradients along the x and y directions Similar feature vectors → similar height Calculating the height field H by optimizing: I': the illumination image I(H): the rendered image Ni: the nearest pixels of pixel i
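The energy appears only as an image; from the definitions above it is presumably of the form (the weight λ and the exact pairwise term are assumptions):

    E(H) = \sum_i \big( I(H)_i - I'_i \big)^2
         + \lambda \sum_i \sum_{j \in N_i} \big( h_i - h_j \big)^2

where the first term makes the image rendered with the height field match the illumination image, and the second encourages pixels with similar feature vectors to take similar heights.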

21 Computing Height Field
Input texture image Calculated height field Rendered without height field Rendered with height field Input illumination image

22 Rendering Results

23 New Image Synthesizing
Reflectance image Diffuse component Orientation Field Diffuse image Rendered image BRDF parameters Specular component A-S BRDF model Height Field
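A minimal Python sketch of how one pixel of the synthesized image could combine the recovered quantities; only the split into a diffuse term from the reflectance image and an Ashikhmin-Shirley specular term driven by the orientation field is taken from the slide, while the helper names and the exact frame construction are assumptions:

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def shade_pixel(albedo, n, u, v, k1, k2, Rs, nu, nv):
        # albedo: diffuse color from the reflectance image (3-vector)
        # n, u, v: shading normal and tangent frame; u follows the orientation
        #          field and n is perturbed by the height field
        # k1, k2: unit vectors toward the light and the viewer
        # Rs, nu, nv: recovered Ashikhmin-Shirley parameters (scalar Rs here)
        h = normalize(k1 + k2)                       # half vector
        nk1, nk2, nh, hk = n @ k1, n @ k2, n @ h, h @ k1
        if nk1 <= 0.0 or nk2 <= 0.0:
            return np.zeros(3)                       # light or viewer below surface
        F = Rs + (1.0 - Rs) * (1.0 - hk) ** 5        # Schlick Fresnel
        # Anisotropic specular lobe of the Ashikhmin-Shirley model
        expo = (nu * (h @ u) ** 2 + nv * (h @ v) ** 2) / max(1.0 - nh ** 2, 1e-8)
        spec = (np.sqrt((nu + 1.0) * (nv + 1.0)) / (8.0 * np.pi)
                * nh ** expo / (hk * max(nk1, nk2)) * F)
        # Diffuse term driven by the per-pixel color of the reflectance image
        diff = (28.0 * albedo / (23.0 * np.pi) * (1.0 - Rs)
                * (1.0 - (1.0 - nk1 / 2.0) ** 5)
                * (1.0 - (1.0 - nk2 / 2.0) ** 5))
        return (diff + spec) * nk1                   # cosine-weighted radiance

In the full pipeline this would be evaluated at every pixel, with u rotated into the local anisotropy direction given by the orientation field and n perturbed by the recovered height field.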

24 Experimental Results

25 An efficient anisotropic material modeling method
Conclusions An efficient anisotropic material modeling method Requires only one image pair No complex equipment Quick and inexpensive material approximation Future Work Reconstruct a spatially-varying normal distribution Increase the precision of the orientation field and the BRDF parameters Extend to modeling inhomogeneous materials

26 Email: fengjie@pku.edu.cn
Thank you!

