Ever wonder why your photos never turn out as amazing as those posted by your favorite Instagrammer? There’s probably a lot of post-processing happening in Photoshop that you don’t see. But instead of poking at sliders for an hour, computer scientists want to make it incredibly easy for even amateur photographers to achieve results comparable to a professional’s.
In a paper recently posted to the arXiv preprint server titled Deep Photo Style Transfer, Sylvain Paris and Eli Shechtman from Adobe, working with Fujun Luan and Kavita Bala from Cornell University, detail a new deep-learning approach to post-production and color correction that can automatically apply the visual aesthetics of one photograph (lighting, colors, tone) to a completely different shot, with results that still look photorealistic.
Image-processing algorithms like this are not new, but their results often have a painting-like aesthetic: fine details get lost, straight lines get warped and distorted, and color changes are applied to broad regions of an image. That’s far from ideal, since it requires further processing afterwards to fix the mistakes.
The goal here was a cleaner, one-step transformation, so the research team turned to neural networks and deep learning. Those terms get thrown around a lot in discussions of artificial intelligence, but they are essential to automating a complex process like this one. Explicitly teaching software to spot and process every possible object on Earth is impossible; instead, the software makes corrections to thousands of sample images and receives feedback on when it has done a good job and when it hasn’t, so that over time the algorithm adapts and learns.
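The correct-and-get-feedback loop described above is the core of how such systems train. As a minimal, purely illustrative sketch (not the paper’s method, which uses deep convolutional networks), the toy model below learns a hypothetical 3x3 color-grading matrix by repeatedly adjusting its output to reduce an error signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "photo": 100 pixels, each an RGB triple in [0, 1].
source = rng.random((100, 3))

# A hidden "professional" color grade we want the model to recover
# (a made-up 3x3 color-mixing matrix, purely for illustration).
true_grade = np.array([[0.9, 0.1, 0.0],
                       [0.0, 0.8, 0.2],
                       [0.1, 0.0, 0.9]])
target = source @ true_grade.T

# The model starts with no grading (identity) and adapts over time.
W = np.eye(3)
lr = 0.5
for step in range(500):
    pred = source @ W.T            # apply the current correction
    err = pred - target            # feedback: how wrong is the output?
    loss = np.mean(err ** 2)
    # Gradient of the squared error with respect to W (up to a constant).
    grad = 2 * err.T @ source / len(source)
    W -= lr * grad                 # correction: nudge toward less error

print(loss)  # shrinks toward zero as the model learns the grade
```

After enough iterations the learned matrix `W` closely matches `true_grade`, even though the model was never told what the grade was; it only ever saw its own error. Real style-transfer networks follow the same principle with millions of parameters and far richer feedback signals.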
Eventually, without being taught what a building is or looks like, the algorithm will automatically know that colors appearing in the sky regions of a photo shouldn’t be applied to man-made structures. The new algorithm is also designed to only make tweaks to an image’s colors and…