AI & Neural Networks' Impact on Digital Image Processing

Gordon Cooper

Mar 08, 2021 / 3 min read

When you’re designing an application like a digital TV, you’ll want to deliver the sharpest, most vivid images possible. But there may be times when some resolution is lost. For example, to preserve bandwidth, you might transmit the video stream to the display at a lower resolution and then upscale the footage to meet the display requirements. Fortunately, image processing technology has come a long way to help you meet image quality demands, especially with the emergence of artificial intelligence (AI) and deep-learning processing capabilities. In this blog post, I’ll discuss how AI and deep learning can enhance vision applications through image quality improvement.

Vision applications encompass a wide range of areas, from the digital TV example we’ve discussed to gaming, digital cameras, remote sensing, and multifunction printers (MFPs). To perform as expected, each of these applications demands high-quality images and, in many cases, demands that those images be rendered and delivered in real time. An information-theory result called the data processing inequality says that processing alone can’t add information to an image that wasn’t captured in the first place. So when you start with a very pixelated image, there’s only so much magic that conventional image-processing tools can perform. This is why a new level of image processing is needed.
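To make the intuition precise: if X is the original scene, Y is the captured image, and Z is any processed version of Y, the three form a Markov chain X → Y → Z, and the inequality bounds what any processing step can recover. This is the standard textbook formulation; the imaging interpretation is one natural reading of it.

```latex
% Data processing inequality for the Markov chain X -> Y -> Z
% (scene -> captured image -> processed image):
% no processing step Z = f(Y) can increase information about X.
I(X;Z) \le I(X;Y)
```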


Faster, More Accurate Image Processing

Once trained, convolutional neural networks (CNNs) provide a relatively efficient way to support image enhancement. CNNs have been around since the 1980s, but they became truly powerful tools for image processing once they were deployed on GPUs in the 2000s. After a network has been trained, inference is a fixed, predictable computation that requires no further programming, and a CNN can be trained to recognize as broad a swath of categories as you want without hand-coding rules for each new one.
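To give a feel for how compact such a network can be, here is a minimal SRCNN-style enhancement model sketched in PyTorch. It is illustrative only: the architecture, layer sizes, and names are assumptions for the sketch, not the network used by any product discussed here.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Minimal SRCNN-style network. It takes an image that has already
    been upscaled to the target size (e.g., bicubically) and learns to
    restore sharpness and detail."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Once trained, inference is a fixed feed-forward pass:
model = SRCNN().eval()
with torch.no_grad():
    blurry = torch.rand(1, 3, 256, 256)   # stand-in for a pre-upscaled frame
    sharpened = model(blurry)
```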

Over the years, deep learning and neural network algorithms have evolved to encompass super-resolution neural networks, which can upscale images by 8x or even 16x with much sharper, more vivid results. Because they’re trained on very large datasets, learning-based algorithms are good at scaling images and recovering apparent resolution: they fill in what the original image is missing with plausible detail learned from their training data.
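One common mechanism behind such large scale factors is sub-pixel convolution (pixel shuffle): the network expands the channel dimension, then rearranges those channels into extra spatial resolution, and stacked 2x stages reach 8x or 16x. Here is a sketch in PyTorch, with all sizes chosen purely for illustration:

```python
import torch
import torch.nn as nn

def upsample_2x(channels: int) -> nn.Sequential:
    """One 2x sub-pixel convolution stage: quadruple the channels,
    then fold them into a feature map twice as wide and tall."""
    return nn.Sequential(
        nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1),
        nn.PixelShuffle(2),       # (N, C*4, H, W) -> (N, C, 2H, 2W)
        nn.ReLU(inplace=True),
    )

# Three stacked 2x stages give 8x; four give 16x.
upsampler = nn.Sequential(*[upsample_2x(32) for _ in range(3)])
features = torch.rand(1, 32, 64, 64)
print(upsampler(features).shape)  # torch.Size([1, 32, 512, 512])
```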

Running image-processing neural networks on GPUs and CPUs is fine for applications that don’t require real-time results, such as certain video games or restoration of old movies. But say you’ve got an application like an augmented reality game, where you need to render and display a high-resolution image in real time while ingesting data from another source. In these cases, some GPUs and CPUs lack the processing performance to deliver instant results; the back-of-the-envelope budget after the list below illustrates why. On-the-fly image or video upscaling applications, especially those using super-resolution neural networks, call for a dedicated neural network processor. Ideally, such a processor should deliver:

  • Fast performance
  • Low power
  • Small footprint
  • Scalability
  • Flexibility to support various neural network graphs
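To see why a dedicated processor matters, consider a rough pixel-rate budget for real-time 4K upscaling. All the numbers below are illustrative assumptions, not a vendor spec; the ops-per-pixel figure in particular varies enormously from one network to another.

```python
# Back-of-the-envelope compute budget for real-time 4K upscaling.
# All figures are illustrative assumptions, not measured values.
width, height, fps = 3840, 2160, 60
pixels_per_second = width * height * fps            # ~498 million pixels/s

ops_per_pixel = 10_000  # rough guess for a small super-resolution CNN
tops_required = pixels_per_second * ops_per_pixel / 1e12

print(f"{pixels_per_second / 1e6:.0f} Mpixels/s -> "
      f"~{tops_required:.1f} TOPS sustained")
```

Sustaining several TOPS within a consumer device’s power and area budget is exactly the regime where dedicated accelerators outperform general-purpose CPUs and GPUs.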

Flexible, Low-Power Vision Processor IP Cores

In our DesignWare® ARC® EV Embedded Vision Processor family, Synopsys provides fully programmable and configurable IP cores for embedded vision applications. The cores, which marry the flexibility of software with the low cost and low power consumption of hardware, integrate an optional high-performance deep neural network (DNN) accelerator for fast and accurate execution of CNNs. The accelerator is neural-network-agnostic, so it can run standard and custom neural network graphs alike.

A recent example of success with the vision processor family comes from Kyocera Document Solutions. The company achieved first-pass silicon success for its new MFP SoC using DesignWare ARC EV6x Embedded Vision Processor IP with CNN engine and the DesignWare ARC MetaWare® EV Development Toolkit. With the processor IP, Kyocera gained high-performance AI processing capabilities such as super resolution, along with the flexibility to support future AI models. To speed up ARC EV software development, SoC integration, and system validation, the company used the Synopsys HAPS® FPGA-based prototyping system. Kyocera’s TASKalfa 3554ci series MFP SoC is the industry’s first AI-enabled MFP SoC for on-demand super-resolution printing.

“Implementing advanced AI functionality into our MFP SoC required high-performance, low-power processor IP with a high-quality tool chain, allowing us to find and test AI algorithms while developing the SoC in parallel,” said Michihiro Okada, general manager, software development division at Kyocera Document Solutions Inc. “Only Synopsys DesignWare ARC EV Processor IP and the mature MetaWare EV Toolkit met our flexibility, performance, accuracy, and area requirements.”

The figures below show the image enhancement results that Kyocera can achieve using DesignWare ARC EV Processor IP.

[Figure: Kyocera image enhancement results achieved with DesignWare ARC EV Processor IP]

Summary

The demand for vivid, sharp images and video will only increase as the market for vision applications continues to grow. Vision processor IP with an integrated CNN engine can provide the high performance, low power, and flexibility needed to turn what might otherwise be a blurry or pixelated image into a sharp, vivid result.
