
Hi everyone. I am trying to extract the boundary and dimensions of a bubble in water using MATLAB. The code and result are shown below.

clear;
clc;
i1 = imread('1.jpg');    % background frame
i2 = imread('14.jpg');   % frame with the bubble
% i1 = rgb2gray(i1);
% i2 = rgb2gray(i2);
[m, n] = size(i1);
im1 = double(i1);
im2 = double(i2);
i3 = zeros(size(i1));
threshold = 29;
for i = 1:m
    for j = 1:n
        if abs(im2(i,j) - im1(i,j)) > threshold
            i3(i,j) = 1;   % pixel changed more than the threshold -> foreground
        else
            i3(i,j) = 0;   % otherwise background
        end
    end
end
se = strel('square', 5);
filteredForeground = imopen(i3, se);
figure; imshow(filteredForeground); title('Clean Foreground');
BW1 = edge(filteredForeground,'sobel');
subplot(2,2,1);imshow(i1);title('BackGround');
subplot(2,2,2);imshow(i2);title('Current Frame');
subplot(2,2,3);imshow(filteredForeground);title('Clean Foreground');
subplot(2,2,4);imshow(BW1);title('Edge');

[Figure: background, current frame, clean foreground, and edge detection result]

As the figure shows, the result is not very satisfactory. Can anyone give me some advice to improve it? Also, how can I write the boundary coordinates to a file and get the real dimensions of the bubble? Thank you very much!

[Image: background frame]

[Image: frame with the bubble]

Jack Ji
  • Why is it not satisfactory? It's pretty good. I'd say dilate and erode the black-and-white image before computing the edge. If you need more help, provide the original images. – Ander Biguri Jul 20 '18 at 09:42
  • @AnderBiguri Hello, thank you for the reply. I've uploaded the original images. I think the boundary result I got is not smooth. What I'm trying to get is the outer boundary, which is more like a triangle shape. Can you give me some advice? Thanks a lot. – Jack Ji Jul 20 '18 at 11:10
  • Do you know you can just do `i3=abs(im1-im2)>threshold`? No need for loops or conditionals! – Cris Luengo Jul 20 '18 at 13:07
  • Following some discussion I had with Cris, and looking at your images [here](https://stackoverflow.com/questions/47888636/how-to-get-the-area-of-the-bubble-in-the-image-using-matlab), I'd say you need to improve your experiment. Make sure you have a good lighting setup. That would make sure you can in fact remove the background. – Ander Biguri Jul 20 '18 at 14:16

2 Answers


First note that your background removal is almost useless.

If we plot `diffI = i2 - i1; imshow(diffI, []); colorbar`, we can see that the difference is almost as big as the image itself. You need to understand that what is visually similar to you is not necessarily similar numerically, and this is a great example of that.

[Image: difference image i2 - i1, shown with a colorbar]
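
For reference, here is a self-contained version of that check (a minimal sketch, assuming the question's file names '1.jpg' and '14.jpg'; casting to double first avoids the clipping you would get when subtracting uint8 images):

i1 = double(imread('1.jpg'));    % background frame (assumed file name)
i2 = double(imread('14.jpg'));   % frame with the bubble (assumed file name)
diffI = i2 - i1;                 % signed difference; double avoids uint8 saturation
figure; imshow(diffI, []); colorbar;
title('Difference between current frame and background');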

Therefore you don't have what you think you have: the background is still present after your thresholding. Also note that the object you want to segment is not simply whiter than its surroundings; in some areas it is as dark as the background. This means that a simple segmentation by thresholding pixel values will not work. You need better segmentation techniques.

I happen to have a copy of one such level set algorithm in MATLAB, the "Distance Regularized Level Set Evolution" (DRLSE).

When I run the code demo_1 with your image, I get the following (nice gif!):

[Animated GIF: the zero level contour evolving onto the bubble boundary (uncompressed version also available)]

Full code of the demo:

%  This Matlab code demonstrates an edge-based active contour model as an application of 
%  the Distance Regularized Level Set Evolution (DRLSE) formulation in the following paper:
%
%  C. Li, C. Xu, C. Gui, M. D. Fox, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation", 
%     IEEE Trans. Image Processing, vol. 19 (12), pp. 3243-3254, 2010.
%
% Author: Chunming Li, all rights reserved
% E-mail: lchunming@gmail.com   
%         li_chunming@hotmail.com 
% URL:  http://www.imagecomputing.org/~cmli//

clear all;
close all;


Img=imread('https://i.stack.imgur.com/Wt9be.jpg');

Img=double(Img(:,:,1));

%% parameter setting
timestep=1;  % time step
mu=0.2/timestep;  % coefficient of the distance regularization term R(phi)
iter_inner=5;
iter_outer=300;
lambda=5; % coefficient of the weighted length term L(phi)
alfa=-3;  % coefficient of the weighted area term A(phi)
epsilon=1.5; % parameter that specifies the width of the Dirac delta function

sigma=.8;    % scale parameter in Gaussian kernel
G=fspecial('gaussian',15,sigma); % Gaussian kernel
Img_smooth=conv2(Img,G,'same');  % smooth image by Gaussian convolution
[Ix,Iy]=gradient(Img_smooth);
f=Ix.^2+Iy.^2;
g=1./(1+f);  % edge indicator function.

% initialize LSF as binary step function
c0=2;
initialLSF = c0*ones(size(Img));
% generate the initial region R0 as a small rectangle in the image center
initialLSF(round(size(Img,1)/2)-5:round(size(Img,1)/2)+5, round(size(Img,2)/2)-5:round(size(Img,2)/2)+5)=-c0;
% initialLSF(25:35,40:50)=-c0;
phi=initialLSF;



potential=2;  
if potential ==1
    potentialFunction = 'single-well';  % use single well potential p1(s)=0.5*(s-1)^2, which is good for region-based model 
elseif potential == 2
    potentialFunction = 'double-well';  % use double-well potential in Eq. (16), which is good for both edge and region based models
else
    potentialFunction = 'double-well';  % default choice of potential function
end  

% start level set evolution
for n=1:iter_outer
    phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);    
    if mod(n,2)==0
        figure(2);
        imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on;  contour(phi, [0,0], 'r');
        drawnow

    end
end

% refine the zero level contour by further level set evolution with alfa=0
alfa=0;
iter_refine = 10;
phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);

finalLSF=phi;
figure(2);
imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on;  contour(phi, [0,0], 'r');
str=['Final zero level contour, ', num2str(iter_outer*iter_inner+iter_refine), ' iterations'];
title(str);
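
If you also need the boundary coordinates in a file and a physical size (the second part of the question), one possible sketch is to extract the final zero level contour with contourc and scale it by a calibration factor; the factor below is only a placeholder that you would have to measure from your own setup:

% Sketch, not part of the original DRLSE demo: export the final zero level contour.
C = contourc(phi, [0 0]);        % contour matrix of the phi == 0 level
npts = C(2,1);                   % number of points in the first contour segment
xy = C(:, 2:1+npts)';            % N-by-2 [x y] boundary coordinates, in pixels
px2mm = 0.05;                    % placeholder calibration: mm per pixel
writematrix(xy * px2mm, 'bubble_boundary_mm.csv');   % R2019a+; use dlmwrite on older releases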
Ander Biguri
  • That gif is very tasty – Wolfie Jul 20 '18 at 14:21
  • @AnderBiguri Wow! That gif is fantastic! Thank you for the suggestion about the experiment; the code is very helpful! – Jack Ji Jul 21 '18 at 03:11
  • I've tried your code on my other images; some of the results are satisfactory like your gif, some of them are not. I think it might be because the experiment set-up is too simple, as pointed out by you and @Cris Luengo. Again, thanks a lot :) – Jack Ji Jul 21 '18 at 03:24
  • @JackJi also play with the parameters – Ander Biguri Jul 21 '18 at 23:25

Ander pointed out in his answer that the background image doesn't match the background of the bubble image. My very best advice to you is not to try to fix this in code, but to fix your experimental setup. If you fix this in software, you'll get a complicated program with lots of "magic numbers" that nobody will be able to maintain after you graduate and leave. Anybody wanting to continue your work will have a hard time adjusting the program to match some new experimental conditions. Fixing the setup will lead to an experiment that is much easier to reproduce and to build on.

So what is wrong with the background picture? First of all, make sure the illumination hasn't changed since you took it. Let's assume you took the pictures in succession, and the change in background illumination is due to shadows of the bubble on the background.

In your previous question about this topic you got some so-so advice about your experimental setup. This picture is from that question:

[Image: experimental setup from the previous question]

This looks really great: you have a transparent tank and a big white surface behind it. I recommend that you take the reticulated sheet out from behind it, and put all your lights on the white background. The goal is to get back-illuminated bubbles. The bubbles will cast a shadow, but it will be towards the camera, not the background -- they will darken the image, making detection really simple. But you need to make sure there is no direct light falling on the bubbles, since the reflection of that light towards the camera will cause highlights (as you see in your picture) that could be brighter than the background, or at least will reduce contrast.

If you keep some distance between the tank and the white background, then when focusing the camera on the bubbles that background will be out of focus and blurred, meaning that it will be fairly uniform. The less detail in the background, the easier the detection of bubbles is.

If you need the markings from the reticulated sheet, then I recommend you use a transparent sheet for that, on which you can draw lines with a permanent marker.


Sorry, this was not at all a programming answer... :)

So here is what this could look like. An example image with bubbles that we've used in Delft for many decades as an exercise:

[Image: example bubble image, bubbles.tif]

I actually don't know what it is from, but they seem to be small bubbles in a liquid. Some are out of focus; you won't have this problem. Segmentation is quite simple (this uses MATLAB with DIPimage):

img = readim('bubbles.tif');
background = closing(img,25);       % estimate of the background by a grey-value closing
out = threshold(background - img);  % bubbles are darker, so subtract the image from the background
out = fillholes(out);               % fill the holes inside the bubbles
traces = traceobjects(out);         % trace the boundary of each bubble as a polygon

[Image: segmentation output with traced boundaries]

If you have a background image (which of course you'll have), then you don't need to estimate it. What the code then does is simply threshold the difference between the background and the image (since the bubbles are darker, I subtract the image from the background instead of the other way around), plus a very simple post-processing step to fill up the holes in the objects. Depending on what your images look like, you might need a bit more preprocessing or postprocessing... Think about noise removal in the input image!
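
For readers without DIPimage, roughly the same pipeline can be sketched with the Image Processing Toolbox alone (the file names and the noise-removal sizes below are assumptions, not part of the original answer):

bg  = im2double(imread('1.jpg'));    % background frame (assumed file name)
img = im2double(imread('14.jpg'));   % frame with the bubble (assumed file name)
d   = max(bg - img, 0);              % bubbles are darker, so background minus image
d   = imgaussfilt(d, 1);             % light noise removal in the input image
bw  = imbinarize(d);                 % automatic (Otsu) threshold of the difference
bw  = imfill(bw, 'holes');           % fill the holes inside the bubbles
bw  = bwareaopen(bw, 50);            % drop tiny noise blobs (size to be tuned)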

The last line traces the object boundaries, yielding a polygon for each bubble (this last command is only in DIPimage 3.0, which isn't officially released yet, but you can compile it yourself if you're adventurous). Alternatively, use the bwboundaries function from the Image Processing Toolbox:

traces = bwboundaries(dip_array(out));
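
To turn the traced objects into real dimensions (the last part of the question), you need a pixel-to-mm calibration factor, for example measured from the reticulated sheet. A sketch using regionprops on the binary mask, with a placeholder calibration value:

bw = logical(dip_array(out));                    % binary mask as a plain MATLAB array
px2mm = 0.05;                                    % placeholder calibration: mm per pixel
stats = regionprops(bw, 'Area', 'EquivDiameter');
area_mm2 = [stats.Area] * px2mm^2;               % bubble areas in mm^2
diam_mm  = [stats.EquivDiameter] * px2mm;        % equivalent diameters in mm
writematrix([area_mm2(:), diam_mm(:)], 'bubble_dimensions.csv');   % one row per bubble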
Cris Luengo
  • Thank you for your reply. Actually, the experiment was conducted before I joined the lab, so there is nothing I can control. All I can do is get the bubble dimensions from lots of high-speed camera images. Since the number of images is large, getting the dimensions manually with CAD software is a tiring job. So here I am; since everything is new to me, thank you again for the code advice. – Jack Ji Jul 21 '18 at 03:20