I’m among the legions who fume when the investigator on the TV show zooms in endlessly on a photo to uncover some minute detail that in reality couldn’t have been photographed by any camera. Worst is when the investigator clicks some “increase resolution” button to smooth a bunch of blocky pixels into a richly detailed image…
Although that Hollywood hokum is an information-theory impossibility with a single image, some limits are lifted when you have multiple shots of the same scene. And a start-up called MotionDSP is working on commercializing that technology to improve photo and video quality…
The technology can also get rid of chunky compression artifacts, smooth jagged lines, enrich colors, reveal details, and make text readable. It's an example of computational photography (or videography, in this case), in which sophisticated computer processing improves a photo or video after it has been taken.
MotionDSP has been funded by In-Q-Tel, the Central Intelligence Agency’s venture investment arm, which naturally is interested in software to extract information from grainy or low-resolution images. But the San Mateo, Calif.-based company is raising a new round of funding to underwrite a more consumer-oriented application of its software.
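The underlying multi-frame idea is well established, whatever MotionDSP's proprietary twist may be. Here's a minimal sketch in Python with NumPy (all the numbers are illustrative, and this is not MotionDSP's actual algorithm): several noisy shots of the same scene, once aligned, can be averaged so that random noise cancels while the underlying detail survives.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.tile([0.0, 1.0], 50)   # a hypothetical 1-D "scene" pattern
# Several already-aligned shots of the same scene, each with random noise.
frames = [scene + rng.normal(0, 0.5, scene.shape) for _ in range(64)]

single_err = np.abs(frames[0] - scene).mean()
stacked = np.mean(frames, axis=0)  # average the aligned frames
stacked_err = np.abs(stacked - scene).mean()
print(stacked_err < single_err)    # True: averaging N frames cuts noise roughly by sqrt(N)
```

Real systems also have to register the frames (estimate and undo camera motion) before averaging, which is where the hard work lies.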
Good tech is always useful. It’s the ends – not the means – that get scary.
Thanks, Pat
Great, now we can finally blow up all those small pixelated porn thumbs from the crappy pre-1996 interwebs.
I would love to see real-world applications of this. It just looks like a noise-reduction filter in the inside shot; otherwise I'm calling shenanigans. So it just notices where the artifact is in each picture and averages? What happens if the program saves to JPEG?
kind of like reverse anti-aliasing….
This is technology that NASA came up with about 10 years ago for dealing with images of stars (Tom Cruise excluded). NASA helped out with trying to ID a car that was used in the murder of my best friend's mother.
A cheap way of doing it is by using Photoshop and a few images of planets or the moon.
I have no problem if this is used for the common good.
I have a big problem if it is not used for the common good.
Ha Ha!
If this is actually CIA technology, I want my tax money back.
You've been able to buy the same thing for $45 for years now:
http://www.focusmagic.com/exampleforensics.htm
I guess we should call the blog post "News for people who are behind in technology."
No Comment
Maybe new to the CIA, but this has been around for a LONG time.
I remember experimenting with motion video and frame averaging and subtraction in the mid-'90s, with similar results to the example.
I used a similar method to expose those 3-D pictures where you have to cross your eyes slightly to see the image: using two identical images and offsetting them with a translucent filter to show the 3-D objects layer by layer. I actually saw bits and pieces that were missed otherwise.
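Frame subtraction, as mentioned above, is the simplest of these tricks: differencing two near-identical images exposes only what changed (or, for an offset stereo pair, what shifted). A tiny illustrative sketch, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
base = rng.random((8, 8))          # first frame
changed = base.copy()
changed[3, 4] += 0.5               # second frame differs at one pixel

# Subtracting the frames leaves only the difference.
diff = np.abs(changed - base)
i, j = np.unravel_index(np.argmax(diff), diff.shape)
print(int(i), int(j))              # prints 3 4, the location of the change
```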
Yes, old news. When I was in college, some of my professors were in the business of sharpening spy images from the Hubble telescope. (Yes, in case we forgot, the Hubble is the best spy camera ever invented.) Yes, it was possible to discern quite a lot from space, even down to small details.
#5, this is not the same. Look at the examples: it's not just an increase in contrast or oversharpening.
Sharpening is being used in the generic sense in my comment. The real methodology to create a single "sharpened" image as exhibited in the article is Multiple Layered Probability Collapse using Gaussian Root Mean Deviation on a series of coincident images. (Not to be confused with standard Photoshop sharpening.)
Statistically speaking, each pixel in each picture is given a Gaussian probability of its location using a modified bell curve.
Then, across pictures, the probability of each pixel's location is summed and normalized. This normalized summation gives the best estimate of what the original image actually looked like, and can be "played in reverse" to create an image.
But for the layman, let’s just say it’s a form of sharpening.
Clarification on what I meant by "each picture": to create a resultant picture, the Gaussian probability of each overlapping pixel in each individual picture is summed and normalized to create a final "sharpened" picture.
I hope this clarifies things….
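Terminology aside, the procedure described above (weight each frame's pixels by a Gaussian confidence, sum, and normalize) can be sketched like this. The weighting scheme and numbers here are illustrative stand-ins, not the commenter's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.linspace(0, 1, 20)     # a hypothetical 1-D "true" image
frames = np.stack([truth + rng.normal(0, 0.2, truth.shape) for _ in range(30)])

# Gaussian weight for each pixel: how close it sits to the per-pixel median
# across frames (a stand-in for the "probability of its location").
med = np.median(frames, axis=0)
sigma = 0.2
weights = np.exp(-((frames - med) ** 2) / (2 * sigma ** 2))

# Sum the weighted pixels across frames and normalize, as described above.
result = (weights * frames).sum(axis=0) / weights.sum(axis=0)
print(np.abs(result - truth).mean() < np.abs(frames[0] - truth).mean())  # True
```

The Gaussian weighting down-weights outlier pixels, so the combined image is closer to the original than any single noisy frame.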
I use similar software all the time for astrophotography. It’s known as “stacking”, where you can feed in images from different observing sessions / cameras and get one great image out.
There are numerous systems, from free to a fair amount of money. The programs are quite “smart”, able to register up to several hundred images or frames from movies and produce a final, very sharp hi-res picture.
One example (and quite good software):
http://www.astronomie.be/registax/
Here's a before / after (no Photoshop):
http://tinyurl.com/3a9bmk
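For anyone curious, the register-and-stack idea behind tools like RegiStax can be sketched in a few lines. This toy 1-D version (all names and numbers illustrative, not RegiStax's actual algorithm) cross-correlates each frame against a reference to find its shift, undoes the shift, then averages:

```python
import numpy as np

rng = np.random.default_rng(2)
sky = np.zeros(100)
sky[[20, 50, 75]] = [1.0, 0.8, 0.6]   # three "stars" in a 1-D sky

# Each exposure is the same sky, randomly shifted (tracking drift) plus noise.
shifts = rng.integers(-5, 6, size=25)
frames = [np.roll(sky, int(s)) + rng.normal(0, 0.1, sky.shape) for s in shifts]

def align(frame, ref):
    """Undo the circular shift between frame and ref via FFT cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(frame) * np.conj(np.fft.fft(ref))).real
    return np.roll(frame, -int(np.argmax(corr)))

ref = frames[0]
stacked = np.mean([align(f, ref) for f in frames], axis=0)

# The stack is aligned to frames[0], i.e. to the sky shifted by shifts[0].
target = np.roll(sky, int(shifts[0]))
print(np.abs(stacked - target).mean() < np.abs(frames[0] - target).mean())
```

Real stacking software additionally grades frames by sharpness and rejects the worst ones before combining, which is why hundreds of video frames can yield one crisp planetary image.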
You know this is the kind of stuff I would expect from this blog site…computer tech geeky stuff!
Not that I don’t enjoy the ‘occasional’ religion or bigot trolling story that gets everyone rolling up their sleeves… Y’know what I mean?
Ah_yea, sounds like you've been around the track with this stuff. 😉
Cheers
I pretend not to be a geek in real life, but sometimes I can’t help myself!!
BubbaRay, you got it exactly right. This is really good stuff!
But does the final "processed" image actually reflect reality, or is it just an artifact of the algorithms used to modify the original image?
Is it live, or is it just Memorex?