
CS-419/619 Computer Vision

PROJECT REPORT
Estimating Surface Reflectance properties from Images under Unknown Illumination

Pranav P Nair (1000238) Sanid K (1000338) Pragalbh Garg (1000230)


CONTENTS

1. OBJECTIVE
2. ABSTRACT
3. WORK FLOW
4. DATABASE TO BE USED
5. FEATURES TO BE USED
6. FEATURE EXTRACTION
7. CLASSIFIER
8. RESULTS
9. REFERENCE
10. APPENDIX
    10.1 MATLAB code for unwrapping
    10.2 MATLAB code for feature extraction
    10.3 MATLAB code for classification


OBJECTIVE:
To develop a machine vision system to perform reflectance estimation tasks automatically under unknown illumination using wavelet pyramid coefficients and statistical values.

ABSTRACT:
Physical surfaces such as metal, plastic, and paper possess different optical qualities that lead to different characteristics in images. We have found that humans can effectively estimate certain surface reflectance properties from a single image without knowledge of illumination. We develop a machine vision system to perform similar reflectance estimation tasks automatically. The problem of estimating reflectance from single images under unknown, complex illumination proves highly underconstrained due to the variety of potential reflectances and illuminations. Our solution relies on statistical regularities in the spatial structure of real-world illumination. These regularities translate into predictable relationships between surface reflectance and certain statistical features of the image. We determine these relationships using machine learning techniques. An ability to estimate reflectance under uncontrolled illumination will further efforts to recognize materials and surface properties, to capture computer graphics models from photographs, and to generalize classical motion and stereo algorithms such that they can handle non-Lambertian surfaces.

We also make use of wavelet-domain analysis, which has proven particularly powerful in capturing natural image structure. Distributions of wavelet coefficients at any given scale and orientation are heavy-tailed, falling off much more slowly than a Gaussian distribution, and the variance of the coefficient distributions tends to increase in a geometric sequence as one moves to successively coarser scales. Most previous work in reflectance estimation has considered the case of point-source illumination as a convenient starting point. We wish instead to take advantage of the statistical complexity of natural illumination in estimating reflectance. In a nutshell, our project formulates the problem of reflectance estimation under unknown illumination, presents a framework for solving this problem using the statistical regularity of real-world illumination, and illustrates preliminary results for both synthetic images and photographs.

DATABASE TO BE USED:

Figure: Photographs of the same four spheres under two different illuminations, one illumination per row. Each sphere possesses distinct reflectance properties which a human can recognize under different illuminations: (A) shiny black plastic, (B) chrome, (C) rough metal, (D) white matte paint.


WORK FLOW

Figure: Flowchart for the computation of image features, which applies to both training and testing of the classifier. The features are histogram statistics, computed on the original image and on its wavelet transform.


FEATURES TO BE USED:
We used the classifier with the following six statistics, as described in the paper (a short computational sketch follows the list):

1. The mean of the original unwrapped image
2. The 10th percentile of the original unwrapped image
3. The variance of coefficients in the finest radially (vertically) oriented sub-band
4. The variance of coefficients in the second finest radially oriented sub-band
5. The ratio of the above two variances
6. The kurtosis of the second finest radially oriented sub-band
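As a rough illustration (not the exact appendix code), these six statistics can be computed for a single unwrapped image as sketched below. The variable annulus and the use of detcoef2 to pull out the vertical sub-bands are our own shorthand; the full extraction loop over the whole dataset is given in Appendix 10.2.

% Sketch: the six statistics for one unwrapped annulus image.
% Assumes "annulus" is a double-valued grayscale image returned by the
% unwrap() function of Appendix 10.1; detcoef2() is used as a shorthand
% for selecting the vertical detail sub-bands (the appendix code indexes
% the wavedec2 coefficient vector directly instead).
pix = annulus(:);                        % all pixel intensities as one vector
[c, s] = wavedec2(annulus, 2, 'db2');    % two-level Daubechies-2 wavelet pyramid
v1 = detcoef2('v', c, s, 1);             % finest vertical (radially oriented) sub-band
v2 = detcoef2('v', c, s, 2);             % second finest vertical sub-band
feat = zeros(1, 6);
feat(1) = mean(pix);                     % 1. mean of the unwrapped image
feat(2) = prctile(pix, 10);              % 2. 10th percentile of the unwrapped image
feat(3) = var(v1(:));                    % 3. variance, finest radial sub-band
feat(4) = var(v2(:));                    % 4. variance, second finest radial sub-band
feat(5) = feat(3) / feat(4);             % 5. ratio of the two variances
feat(6) = kurtosis(v2(:));               % 6. kurtosis, second finest radial sub-band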

FEATURE EXTRACTION:
Obtain the unwrapped annulus of the original image.
Procedure:
First, we treat the image as a function on the Cartesian plane with its origin at the center of the image. We then transform the pixel coordinates from Cartesian to polar form, and extract the annulus region by fixing lower and upper limits on the radius while letting the angle sweep the full circle.
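A minimal usage sketch of the unwrapping step is shown below; the file name sphere.png is a placeholder for one sphere photograph from the database, and unwrap() refers to the function listed in Appendix 10.1.

% Minimal usage sketch of the unwrapping step ('sphere.png' is a placeholder).
im = imread('sphere.png');
if size(im, 3) == 3
    im = rgb2gray(im);            % reduce a colour photograph to one intensity channel
end
annulus = unwrap(double(im));     % rows index radius, columns index angle in degrees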


CLASSIFIER:
The previously mentioned features (statistics) are now used to build a classifier. Support Vector Machines (SVMs) are used for classification because they generalize well when the number of training samples is limited and the number of features is large. MATLAB's built-in SVM functions support only binary classification, so we downloaded a multi-class SVM implementation from www.mathworks.com and modified it to our needs.
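As a hedged illustration, the classifier is invoked roughly as follows. The variable names trainFeat, trainLabels, testFeat, and testLabels are placeholders: the feature matrices come from svmfeatures() in Appendix 10.2 and the labels are the six material classes numbered 1 to 6.

% Sketch of invoking the modified multi-class SVM (variable names are placeholders).
% trainFeat / testFeat: N-by-6 feature matrices from svmfeatures() (Appendix 10.2)
% trainLabels / testLabels: 1-by-N row vectors of material class indices (1..6)
predicted = multisvm(trainFeat, trainLabels, testFeat);   % multisvm is listed in Appendix 10.3
accuracy  = sum(predicted(:) == testLabels(:)) / numel(testLabels);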

RESULTS:
Training was done for six images under seven illumination conditions each. Subsequently, testing was done for another six images under seven illumination conditions, giving 42 test cases. 40 of the 42 cases were classified correctly, so the accuracy of the system comes out to 40/42 ≈ 95.24%.

The two misclassified cases were: a white pearly sphere misclassified as gray shiny, and a black shiny sphere misclassified as black matte.


REFERENCE:
1. R. O. Dror, E. H. Adelson, and A. S. Willsky, "Estimating Surface Reflectance Properties from Images under Unknown Illumination," Proceedings of SPIE 4299: Human Vision and Electronic Imaging VI, San Jose, California, January 2001.
2. http://www.mathworks.in/matlabcentral/fileexchange/33170-multi-class-support-vectormachine

APPENDIX
%-MATLAB code for unwrapping the annulus-%
function outim = unwrap(inputim)
%-This function takes an image as input and unwraps the annulus,-%
%-with the x axis being the zero-degree line.-%
%-Usage: I = unwrap(image)-%
[x, y] = size(inputim);
orig = [x/2, y/2];                                 % centre of the image is the origin
for i = 1:x
    for j = 1:y
        [th, r] = cart2pol(i - orig(1), j - orig(2));   % polar coordinates of this pixel
        if (th < 0)
            th = th + 2*pi;                        % map the angle into [0, 2*pi)
        end
        th = (th*180)/pi;                          % convert the angle to degrees
        polarim(round(r)+1, round(th)+1) = inputim(i, j);  % radius indexes rows, angle indexes columns
    end
end
% outim = polarim;                                 % full polar image (not used)
outim = polarim(79:239, :);                        %-the limits 79:239 restrict the radii to the annulus-%
end


%-MATLAB code for feature extraction-%


function [featsvm] = svmfeatures(source)
%-Usage: M = svmfeatures(source)-%
%-source is the directory location of the dataset; it is assumed to hold-%
%-one sub-folder per illumination, each containing the six sphere images.-%
imgcount = 0;
orgimgmat = zeros(1, 1000);                       % flattened image vector (grows as needed)
featsvm = zeros(1, 6);
b = dir(source);                                  % list the illumination sub-folders
for n = 1:7                                       % seven illumination conditions
    % b(n+2) skips the '.' and '..' entries returned by dir
    a = dir(sprintf('%s%s', source, b(n+2,1).name));
    for m = 1:6                                   % six spheres per illumination
        image2 = imread(sprintf('%s%s%s%s', source, b(n+2,1).name, '\', a(m+3,1).name));
        imgcount = imgcount + 1;
        img = unwrap(image2);                     % unwrapped annulus of the sphere image
        [x, y] = size(img);
        for i = 1:x                               % flatten the annulus into a row vector
            for j = 1:y
                orgimgmat(1, (i-1)*y+j) = img(i, j);
            end
        end
        featsvm(imgcount, 1) = mean(orgimgmat);          % mean of the unwrapped image
        featsvm(imgcount, 2) = prctile(orgimgmat, 10);   % 10th percentile
        [c, s] = wavedec2(img, 2, 'db2');                % two-level wavelet decomposition
        % variances of the two radially (vertically) oriented detail sub-bands,
        % picked out of the wavedec2 coefficient vector by index
        var1 = var(c(s(1,1)*s(1,2)*2 : s(1,1)*s(1,2)*3));
        var2 = var(c(s(1,1)*s(1,2)*4 + s(3,1)*s(3,2) : s(1,1)*s(1,2)*4 + s(3,1)*s(3,2)*2));
        featsvm(imgcount, 3) = var1;
        featsvm(imgcount, 4) = var2;
        featsvm(imgcount, 5) = var1/var2;                % ratio of the two variances
        featsvm(imgcount, 6) = kurtosis(c(s(1,1)*s(1,2)*4 + s(3,1)*s(3,2) : ...
                                          s(1,1)*s(1,2)*4 + s(3,1)*s(3,2)*2));   % kurtosis of the same sub-band
    end
end
end
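A hedged example call of svmfeatures is given below; the path is a placeholder. The dataset directory is assumed to contain one sub-folder per illumination condition, each holding the six sphere images, and the path must end with a separator because the code concatenates folder and file names directly.

%-Example call (the path is a placeholder for the dataset root):-%
% trainFeat = svmfeatures('C:\dataset\train\');   % returns a 42-by-6 feature matrix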


%-MATLAB code for classification-%


function [itrfin] = multisvm(T, C, test)
% Multi-class SVM by repeated binary (one-vs-rest) classification.
% T    - training data, one row per sample
% C    - row vector of class labels for the training samples
% test - test data, one row per sample
itrind = size(test, 1);
itrfin = [];
Cb = C;                               % keep copies of the full training set,
Tb = T;                               % since T and C are reduced per test sample
for tempind = 1:itrind
    tst = test(tempind, :);
    C = Cb;
    T = Tb;
    u = unique(C);
    N = length(u);
    c4 = [];
    c3 = [];
    j = 1;
    k = 1;
    if (N > 2)
        itr = 1;
        classes = 0;
        cond = max(C) - min(C);
        while ((classes ~= 1) && (itr <= length(u)) && size(C,2) > 1 && cond > 0)
            % train a binary SVM: class u(itr) against all remaining classes
            c1 = (C == u(itr));
            newClass = c1;
            svmStruct = svmtrain(T, newClass, 'kernel_function', 'rbf');
            classes = svmclassify(svmStruct, tst);

            % remove the samples of class u(itr) from the training data
            for i = 1:size(newClass, 2)
                if newClass(1, i) == 0
                    c3(k, :) = T(i, :);
                    k = k + 1;
                end
            end
            T = c3;
            c3 = [];
            k = 1;

            % remove the corresponding labels from the label vector
            for i = 1:size(newClass, 2)
                if newClass(1, i) == 0
                    c4(1, j) = C(1, i);
                    j = j + 1;
                end
            end
            C = c4;
            c4 = [];
            j = 1;
            cond = max(C) - min(C);   % stop when only one class remains

            if classes ~= 1
                itr = itr + 1;        % move on to the next candidate class
            end
        end
    end
    valt = Cb == u(itr);              % recover the label of the selected class
    val = Cb(valt == 1);
    val = unique(val);
    itrfin(tempind, :) = val;         % predicted label for this test sample
end
end

