Right, having had a little rethink on the efficiency of the last proposed method, I’ve come up with this…
What it’s basically saying is that it’s more efficient to accumulate pixel data directly into a collection of histograms than it is to extract all the data into superfluous Vectors and HashMaps and then iterate over them all over again.
For a 64 image sequence, at 352 x 288 resolution, we will be creating a HashMap<Point, RGBHistogram> object, where the RGBHistogram class basically contains a HashMap<Integer, Integer> each for red, green and blue. These histograms will use “bins” at intervals of 5; with integer division (value / 5) on the 0-255 range, that gives bin indices 0-51, i.e. 52 bins per HashMap.
To calculate the memory requirements for the entire HashMap of 64 images – in fact, I’ve just realised, the number of images in the sequence is irrelevant to the size of the structure! Anyway:
352 x 288 = 101376 pixels, so the main HashMap will have an index of 101376 Points
Each of those Points will have 3x HashMap<Integer, Integer>, which contain a maximum* of 52 values each, so let’s say 156 values.
In total, we’re looking at, at the most, 101376 x 156 = 15814656 values. I’ll have to see if that’s a reasonable amount…
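As a rough sketch of the structure I have in mind – the class and method names here are my own placeholders, not final:

```java
import java.awt.Point;
import java.util.HashMap;

// Per-pixel histogram: each channel maps bin index -> count.
// Bins are value / 5, so indices run 0-51.
class RGBHistogram {
    static final int BIN_SIZE = 5;

    final HashMap<Integer, Integer> red   = new HashMap<>();
    final HashMap<Integer, Integer> green = new HashMap<>();
    final HashMap<Integer, Integer> blue  = new HashMap<>();

    static int bin(int channelValue) {
        return channelValue / BIN_SIZE;  // 0-255 -> bin 0-51
    }

    void add(int r, int g, int b) {
        red.merge(bin(r), 1, Integer::sum);
        green.merge(bin(g), 1, Integer::sum);
        blue.merge(bin(b), 1, Integer::sum);
    }
}

// The main structure: one histogram per pixel coordinate,
// accumulated across every image in the sequence.
class HistogramBuilder {
    final HashMap<Point, RGBHistogram> histograms = new HashMap<>();

    void addPixel(int x, int y, int rgb) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        // java.awt.Point implements equals/hashCode, so it works as a key
        histograms.computeIfAbsent(new Point(x, y), p -> new RGBHistogram())
                  .add(r, g, b);
    }
}
```

Bins that never receive a value simply never get an entry – which is exactly the “missing bins” point in the footnote below.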
Now we need to do a quick sweep through and distill that large HashMap into a much more manageable HashMap<Point, Vector<Integer>> mostFrequentHashMap, which associates each Point with just three integer values: the highest-frequency bins of the R, G and B histograms for that pixel.
NOTE: If any of the three maximum counts (R, G or B) falls below a threshold value, the Point is not included in mostFrequentHashMap at all, further reducing the memory requirement. This threshold should take the number of images in the entire sequence into account when examining the maximum frequency found in each case.
For this processed HashMap, we are looking at maximum 101376 x 3 = 304128 values. Easily manageable.
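The distillation step might look something like this – the 50% threshold fraction is a placeholder to be tuned, and RGBHistogram is re-declared minimally here so the snippet stands alone:

```java
import java.awt.Point;
import java.util.HashMap;
import java.util.Map;
import java.util.Vector;

// Minimal re-declaration so this sketch compiles on its own.
class RGBHistogram {
    final HashMap<Integer, Integer> red   = new HashMap<>();
    final HashMap<Integer, Integer> green = new HashMap<>();
    final HashMap<Integer, Integer> blue  = new HashMap<>();
}

class Distiller {

    // Returns the bin index with the highest count, or -1 if the channel
    // histogram is empty or its best count falls below the threshold.
    static int modeBin(HashMap<Integer, Integer> channel, int threshold) {
        int bestBin = -1, bestCount = 0;
        for (Map.Entry<Integer, Integer> e : channel.entrySet()) {
            if (e.getValue() > bestCount) {
                bestCount = e.getValue();
                bestBin = e.getKey();
            }
        }
        return bestCount >= threshold ? bestBin : -1;
    }

    static HashMap<Point, Vector<Integer>> distill(
            HashMap<Point, RGBHistogram> histograms, int sequenceLength) {
        // Require the dominant bin to hold in, say, half the frames --
        // an assumed fraction, scaled by the sequence length as noted above.
        int threshold = sequenceLength / 2;
        HashMap<Point, Vector<Integer>> mostFrequent = new HashMap<>();
        for (Map.Entry<Point, RGBHistogram> e : histograms.entrySet()) {
            RGBHistogram h = e.getValue();
            int r = modeBin(h.red, threshold);
            int g = modeBin(h.green, threshold);
            int b = modeBin(h.blue, threshold);
            if (r >= 0 && g >= 0 && b >= 0) {  // drop weak Points entirely
                Vector<Integer> v = new Vector<>();
                v.add(r); v.add(g); v.add(b);
                mostFrequent.put(e.getKey(), v);
            }
        }
        return mostFrequent;
    }
}
```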
Detecting True “Background” Pixels
Okay, so now we have our mostFrequentHashMap, we can go back and re-process each image individually, extracting each pixel value and comparing its separated RGB values to those in the mostFrequentHashMap.
If we get a null value from mostFrequentHashMap for a given pixel, we move on (that pixel is never a background pixel at any point).
Else if all three of those values are a >90% match, then this is a background pixel position which is currently exhibiting the relevant background colour, so we set it as zeroAlpha.
Once the entire image has been done, we re-save it and move onto the next one in the sequence.
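Sketching that per-image pass – note this assumes one particular reading of the “>90% match” test (the ratio between the pixel’s channel value and the centre of the stored bin must exceed 0.9), which may not be what I end up using:

```java
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Vector;

class BackgroundMasker {
    static final int BIN_SIZE = 5;

    // One possible reading of the ">90% match" rule: ratio of the channel
    // value to the stored bin's centre must exceed 0.9. The centre is
    // always >= 2, so there is no division by zero.
    static boolean matches(int value, int bin) {
        int centre = Math.min(bin * BIN_SIZE + BIN_SIZE / 2, 255);
        int lo = Math.min(value, centre);
        int hi = Math.max(value, centre);
        return (double) lo / hi > 0.9;
    }

    // Zero the alpha of every pixel currently showing its most frequent
    // ("background") colour. Pixels with no entry in mostFrequentHashMap
    // are never background, so the null check skips them.
    static void mask(BufferedImage img,
                     HashMap<Point, Vector<Integer>> mostFrequentHashMap) {
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                Vector<Integer> bins = mostFrequentHashMap.get(new Point(x, y));
                if (bins == null) continue;  // never a background pixel
                int argb = img.getRGB(x, y);
                int r = (argb >> 16) & 0xFF;
                int g = (argb >> 8) & 0xFF;
                int b = argb & 0xFF;
                if (matches(r, bins.get(0)) && matches(g, bins.get(1))
                        && matches(b, bins.get(2))) {
                    img.setRGB(x, y, argb & 0x00FFFFFF);  // zeroAlpha
                }
            }
        }
    }
}
```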
* In truth, most of these HashMaps will have many, many “missing” bins. If a particular pixel coordinate never has values within certain ranges, there simply won’t be an entry in the HashMap. To that end, I need to make sure to check for null first when using these objects.