
WIP: Add DefaultSharpen and DefaultSmooth Ops #557

Open
wants to merge 5 commits into base: master

Conversation

@gselzer (Contributor) commented Jun 11, 2018

This PR adds two Ops adapted from the Sharpen and Smooth ImageJ core plugins.

This PR is currently a WIP because:

  1. There are edge artifacts present in the Ops, and I have not yet figured out where they differ from the plugins. As far as I can tell the Ops sum up the neighborhood pixels the same way the plugins do. Further investigation is needed.

  2. The Plugins should either be deprecated or become thin wrappers around these Ops.

@gselzer (Contributor, Author) commented Jan 15, 2019

Half of the edge artifacts were fixed by adding the logic from the plugins that produces different output for IntegerType images than for non-IntegerType images. Honestly, I have not figured out why the plugin does this, but for the sake of consistency I have added it to the Ops. The other half of the edge artifacts are due to the miscalculation of height and width in DefaultOverlayService (see imagej/imagej-common#83), meaning that the plugin is not actually smoothing/sharpening the last row/column of its input images. I could fix this here by changing the driving for-loops to use <=, but that seems like it would just be papering over the issue.

@bnorthan (Contributor) commented

Hi @gselzer

Do you think the code could be a bit "DRYer"? I can get almost the same result with a script that is a couple of lines long:

```groovy
#@ OpService ops
#@ ImgPlus input
#@OUTPUT ImgPlus(label="Smoothed") smoothed

// Create the kernel
kernel = ops.create().kernel([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]] as double[][], new DoubleType())

// Perform convolution
smoothed = ops.filter().convolve(input, kernel)
```

Of course, there is an issue with my script in that the output and calculations are forced to be 32 bit.

Potentially we could do the following:

  1. Create a kernel of the same type as the input using ops.create().kernel()
  2. Re-use NaiveConvolveC for the convolution. I think we'd need to add the option of "normalizing" during the convolution process. With floating point numbers we can just use "normalized" kernels; with integer types we'd have to normalize after each value is computed.
  3. Make sure the boundary conditions are set such that we get the same behavior as the existing plugin.
  4. Make sure the NaiveConvolveC op handles the case of an N-D image and a 2D kernel properly (that is, it applies the kernel to each slice).
  5. After getting it to work, make sure we pre-initialize all ops to avoid op matching during computation.

What do you think??
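The per-value normalization described in step 2 might be sketched roughly as follows. This is a toy illustration only: plain `int[][]` arrays stand in for ImgLib2 images, `convolveNormalized` is a hypothetical helper (not an existing Op), and border pixels are simply skipped rather than handled with an out-of-bounds strategy.

```java
// Sketch: 2D convolution with per-pixel normalization, as one might do for
// integer-typed images. Instead of pre-normalizing the kernel to floating
// point, each accumulated sum is divided by 'scale' (the sum of the kernel
// coefficients) with round-half-up. Borders are skipped for brevity.
public class NormalizedConvolve {

    static int[][] convolveNormalized(int[][] in, int[][] kernel, int scale) {
        int h = in.length, w = in[0].length;
        int kh = kernel.length, kw = kernel[0].length;
        int[][] out = new int[h][w];
        for (int y = kh / 2; y < h - kh / 2; y++) {
            for (int x = kw / 2; x < w - kw / 2; x++) {
                int sum = 0;
                for (int ky = 0; ky < kh; ky++)
                    for (int kx = 0; kx < kw; kx++)
                        sum += kernel[ky][kx] * in[y + ky - kh / 2][x + kx - kw / 2];
                // normalize after the full sum is accumulated (integer math)
                out[y][x] = (sum + scale / 2) / scale;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = { {9, 9, 9}, {9, 9, 9}, {9, 9, 9} };
        int[][] smooth = { {1, 1, 1}, {1, 1, 1}, {1, 1, 1} };
        // a constant image stays constant under a normalized smooth
        System.out.println(convolveNormalized(img, smooth, 9)[1][1]); // 9
    }
}
```

Normalizing after the sum (rather than using a 1/9-valued floating-point kernel) is what keeps the computation in the input's integer type.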

@gselzer (Contributor, Author) commented Jan 18, 2019

That is a good point @bnorthan; I had little awareness of the convolve Ops, so I didn't even think to use them. At this point I have DefaultSharpen producing the same results as before, now using those two Ops, though there is more cleanup to do. It is also correctly handling 3D images.

[screenshot comparing the plugin and Op results]
(Note that the left is the result from the plugin; I was struggling to get it to display a window title.)

@gselzer (Contributor, Author) commented Jan 18, 2019

@bnorthan both Ops now pass most of the work on to helper Ops. Using some System.nanoTime calls, I found that the Op and the plugin had comparable completion times. Let me know if there is anything else that you see!

@bnorthan (Contributor) commented

Hi @gselzer, it looks to me like both smooth and sharpen are almost exactly the same. It would also be nice to take advantage of the work you have done on slice-wise processing and normalization for other kernels. Do you think it would be a good idea to make an op called something like SliceWise2DConvolveAndNormalize, and then have DefaultSmooth and DefaultSharpen just call SliceWise2DConvolveAndNormalize with the appropriate kernel? (In that case the scale would be computed in the initialize function or something.)

@gselzer (Contributor, Author) commented Jan 18, 2019

@bnorthan yes, I was thinking about that as well, but I did not do it because I had no idea where we would house such an Op, since it does multiple things. I suppose it could go in filter.convolve?

Now that I think about it, #556 is also similar in computation, though it differs slightly in how the output value is calculated. If we were to do this I think we would want to make the Op a SliceWise2DConvolve, and then let each Op do its own normalization (because in the case of findEdges the scale would be 0 if it were computed as in smooth and sharpen). Would you agree, or do you see a better option?
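The slice-wise idea above could look something like the following sketch. This is not real Ops code: plain `double[][][]` arrays stand in for RandomAccessibleIntervals, and the actual implementation would use something like Views.hyperSlice rather than array indexing.

```java
// Sketch: apply a 2D operation independently to each XY slice of a 3D
// stack, which is roughly what a hypothetical SliceWise2DConvolve would do.
// Each caller supplies its own 2D op (a convolve, plus its own
// normalization afterwards).
import java.util.function.UnaryOperator;

public class SliceWise {

    static double[][][] sliceWise(double[][][] stack, UnaryOperator<double[][]> op2d) {
        double[][][] out = new double[stack.length][][];
        for (int z = 0; z < stack.length; z++)
            out[z] = op2d.apply(stack[z]); // each slice processed on its own
        return out;
    }

    public static void main(String[] args) {
        double[][][] stack = { {{1, 2}, {3, 4}}, {{5, 6}, {7, 8}} };
        // toy 2D op: scale every pixel by 2 (a real caller would pass a 2D
        // convolution here instead)
        double[][][] doubled = sliceWise(stack, s -> {
            double[][] r = new double[s.length][s[0].length];
            for (int y = 0; y < s.length; y++)
                for (int x = 0; x < s[0].length; x++)
                    r[y][x] = 2 * s[y][x];
            return r;
        });
        System.out.println(doubled[1][0][1]); // slice 1, row 0, col 1 -> 12.0
    }
}
```

Keeping normalization out of the slice-wise op is what lets smooth, sharpen, and findEdges share it while post-processing differently.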

@bnorthan (Contributor) commented

Hi @gselzer

You are right, we should make the Op a SliceWise2DConvolve and then let each Op do its own normalization. In the case of findEdges it looks like it is two convolves, followed by the root of the sum of squares of the two convolutions.

One other quick question: do you think it is necessary to convert to double for the intermediate calculation? Is there a reason for that? In the original plugin it looks like everything is done in integer math.
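For reference, the findEdges combination described above (two convolutions, then root of sum of squares) reduces per pixel to something like the following sketch, where `gx` and `gy` stand for the two convolution results at one pixel; the 8-bit clamp is shown as an illustrative assumption, since the clamp bound depends on the output type.

```java
// Sketch: per-pixel combination for an IJ1-style find-edges filter.
// 'gx' and 'gy' are the results of convolving with the two gradient
// kernels; the output is sqrt(gx^2 + gy^2), clamped to the type's
// range for integer images (8-bit shown here as an example).
public class EdgeMagnitude {

    static int magnitude8bit(double gx, double gy) {
        double m = Math.sqrt(gx * gx + gy * gy);
        return (int) Math.min(255, Math.round(m));
    }

    public static void main(String[] args) {
        System.out.println(magnitude8bit(3, 4));   // 3-4-5 triangle -> 5
        System.out.println(magnitude8bit(300, 0)); // clamped -> 255
    }
}
```

Note that the square/sqrt step is why findEdges cannot reuse a scale-based normalization: the gradient kernels sum to 0, so their "scale" would be 0.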

@bnorthan (Contributor) commented

Hi @gselzer

A few more thoughts after looking more closely at the IJ1 implementations.

  1. Smooth looks to me like it is just a mean, or am I missing a detail? If smooth is just a 3 by 3 mean, can we just re-use DefaultMeanFilter and pass it a 3 by 3 rectangle?

  2. Find Edges is a bit specialized because some of the kernel coefficients would be 0, and using convolve would result in unneeded multiplications. Maybe there is a way to implement it by extending AbstractNeighborhoodBasedFilter.

  3. Sharpen is implemented with convolution. Do you think it would be worth it to add a normalizer function to NaiveConvolve to avoid looping through the pixels twice? (i.e., instead of calling Convolve followed by Divide, integrate the normalization into Convolve, as is done in IJ1.)

Just some thoughts. Let me know if I've misunderstood anything.
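The equivalence suggested in point 1 (smooth == mean over the kernel footprint) can be checked with a toy example. Both helpers below are illustrative stand-ins, not real Ops: `mean3x3` mimics a rectangle-shaped mean filter at one pixel, `smoothKernel` mimics the 1/9-valued convolution kernel.

```java
// Sketch: at any interior pixel, a 3x3 mean and a convolution with a
// kernel of nine 1/9 coefficients compute the same value (up to
// floating-point rounding).
public class SmoothVsMean {

    static double mean3x3(int[][] in, int y, int x) {
        int sum = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                sum += in[y + dy][x + dx];
        return sum / 9.0; // mean over the rectangular neighborhood
    }

    static double smoothKernel(int[][] in, int y, int x) {
        double acc = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                acc += in[y + dy][x + dx] * (1.0 / 9.0); // normalized kernel
        return acc;
    }

    public static void main(String[] args) {
        int[][] img = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
        // the two formulations agree within floating-point rounding
        System.out.println(Math.abs(mean3x3(img, 1, 1) - smoothKernel(img, 1, 1)) < 1e-9); // true
    }
}
```

The remaining question is only whether the two paths round identically when the result is written back into an integer type.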

@gselzer (Contributor, Author) commented Jan 25, 2019

@bnorthan I think the only thing preventing us from using DefaultMeanFilter for DefaultSmooth is this line here. If you see a way around this I would be happy to use Mean instead.

As far as FindEdges goes, you are probably right that AbstractNeighborhoodBasedFilter would be faster / more efficient. I can try coding something up next week if we think it would be worthwhile to do that instead of using some SliceWise2DConvolve. I am just hoping that it will not be too difficult to manage cutting down / aligning the N-D image correctly for the 2D kernel, compared to the way we have it working now.

As far as adding a normalizer function to convolve goes, I think this could be really powerful, but I am also thinking about separation of concerns here. I feel like adding a normalizer function as a parameter (I think this is the way we would do it?) kind of goes beyond the scope of what a convolve function should do. I dunno...

@bnorthan (Contributor) commented

Hi @gselzer

  1. Have you tested whether DefaultMeanFilter and DefaultSmooth produce the same result? I think the line in question is used for rounding purposes, and the setReal function in AbstractIntegerType may serve the same purpose. What do you think?

  2. Is FindEdges much of a priority right now, given the ongoing redesign efforts? I'm happy to review it further, but would it make sense to hold off until after the next major iteration of ops is released?

  3. You are probably right. No need to make Convolve too complicated unless there is an important reason. When people optimize for speed they may override the op with a native or GPU implementation anyway.
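On the rounding question in point 1: the difference between a plain cast and IJ1-style round-half-up is easy to demonstrate. This toy snippet mimics what an explicit `(int)(x + 0.5)` line does versus truncation (assumption: non-negative values, as in unsigned image types; it does not inspect the actual setReal implementation).

```java
// Sketch: truncation vs round-half-up when writing a floating-point
// intermediate (e.g. a neighborhood mean) back into an integer pixel type.
public class RoundingDemo {

    // plain cast: truncates toward zero
    static int truncate(double v) { return (int) v; }

    // IJ1-style round-half-up, valid for non-negative values
    static int roundHalfUp(double v) { return (int) (v + 0.5); }

    public static void main(String[] args) {
        double v = 4.6; // e.g. an intermediate mean value
        System.out.println(truncate(v));    // 4
        System.out.println(roundHalfUp(v)); // 5
    }
}
```

If setReal rounds rather than truncates, the explicit rounding line in DefaultSmooth would indeed be redundant.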

gselzer and others added 5 commits on May 19, 2020:

- Adapted from the implementations created by Barry Dezonia
- The dimensionality check seemed to fail the Op when the input had MORE than two dimensions. It should fail the Op when the input has LESS than two dimensions.
3 participants