You have probably come across the fact that computers and digital devices understand and process information in binary, the so-called ‘1s’ and ‘0s’. They convert their input into voltage signals that are only ‘high’ or ‘low’, corresponding to 1 and 0 respectively. That is, the signal does not vary continuously over time but in discrete steps of either ‘high’ or ‘low’. This makes processing the input very easy and has tremendous benefits over the analog approach: the data can be stored, modified, and sent to devices thousands of kilometres away within seconds!
We are going to look at a digital component that performs the crucial first step of converting analog input into discrete voltages when you take photos with your smartphone or DSLR. (The analog input here is simply the light captured from the surroundings.) This component is called the ‘image sensor’, a quintessential part of every digital camera. If you are reading this on a smartphone, there is an image sensor just millimetres away from you.
A CMOS sensor
Image source: https://canon-cmos-sensors.com/
We will look at a simplified picture of how CMOS (complementary metal-oxide-semiconductor) image sensors work. They made digital cameras accessible to millions of people in their daily lives and now dominate the digital camera market, sidelining the CCD (charge-coupled device) sensors that were once the only image sensors in existence.
Every camera boasts a megapixel count; even budget smartphones nowadays have a 12-megapixel primary camera. But what does this number indicate? It is not the number of pixels on a digital screen. It is the number, in millions, of ‘photosites’ on the camera's image sensor. So there are 12 million photosites on the image sensor of a 12 MP camera!
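As a quick sanity check, the megapixel count is just the photosite grid's width times its height. A minimal sketch, assuming a hypothetical 4000 × 3000 layout (actual sensor dimensions vary by model):

```python
# Hypothetical 12 MP layout: a 4000 x 3000 grid of photosites.
width, height = 4000, 3000
photosites = width * height
print(photosites)              # 12000000
print(photosites / 1_000_000)  # 12.0 megapixels
```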
Each photosite is like a cavity that captures light coming from the primary lens at the front of the camera. When the shutter opens to take a photo, this light passes through a microlens placed above each photosite. The shutter exposes the frame from top to bottom: each row of pixels is captured in sequential order, creating a rolling effect.
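The row-by-row readout described above can be sketched with a toy timing model. The numbers here (row count and per-row readout time) are assumptions for illustration, not figures from any real sensor:

```python
# Assumed, illustrative numbers: a 3000-row sensor read out at
# 10 microseconds per row.
ROWS = 3000
LINE_TIME_US = 10.0

def row_capture_time(row):
    """Time offset (in microseconds) at which a given row is read out."""
    return row * LINE_TIME_US

print(row_capture_time(0))         # top row: 0.0
print(row_capture_time(ROWS - 1))  # bottom row: 29990.0, i.e. ~30 ms later
```

That ~30 ms gap between the top and bottom rows is what smears fast-moving subjects into the familiar rolling-shutter distortion.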
Color filters (red, green, or blue: RGB) placed above each photosite, below the microlenses, allow light of only one color to enter the photosite. Half of the photosites in an image sensor have green filters, since our eyes are most sensitive to green, and the remaining half is divided equally between blue and red.
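This 2:1:1 green/red/blue arrangement is commonly laid out as a repeating 2 × 2 tile (the Bayer pattern). A minimal sketch, assuming an RGGB tiling, shows how the filter color of any photosite follows from its row and column:

```python
from collections import Counter

def bayer_color(row, col):
    """Filter color at a given photosite, assuming an RGGB Bayer tile."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Count the filters on a small 4 x 4 patch: green should appear
# twice as often as red or blue.
counts = Counter(bayer_color(r, c) for r in range(4) for c in range(4))
print(counts)  # G: 8, R: 4, B: 4
```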
But you don't see only one of the RGB colors in a pixel of a captured photo. The data for the other two is ‘made up’ from the adjacent pixels' data by interpolation algorithms.
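A toy version of this ‘making up’ of missing colors (a process known as demosaicing) can be sketched by averaging whichever neighbouring photosites carry the wanted color. Real demosaicing algorithms are far more sophisticated; this nearest-neighbour average is only an illustration:

```python
def interpolate_missing(mosaic, row, col, color):
    """Estimate the missing 'color' value at (row, col) by averaging the
    neighbouring photosites that recorded that color.
    mosaic[r][c] is a (filter_color, value) pair."""
    h, w = len(mosaic), len(mosaic[0])
    values = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < h and 0 <= c < w and mosaic[r][c][0] == color:
                values.append(mosaic[r][c][1])
    return sum(values) / len(values) if values else 0.0

# A 2 x 2 patch: the red photosite at (0, 0) recorded no green or blue.
mosaic = [[("R", 10), ("G", 20)],
          [("G", 30), ("B", 40)]]
print(interpolate_missing(mosaic, 0, 0, "G"))  # (20 + 30) / 2 = 25.0
```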
Image source: www.tel.com
Each photosite has a photodiode and three or more transistors in a circuit that does the crucial job of converting light into voltage. An overview of what happens in the circuit is as follows. Light passing through the color filter strikes the photodiode, which accumulates electric charge proportional to the amount of light. The circuit is arranged so that part of this charge leaks to ground potential, changing the initial voltage (the reset voltage). This voltage change alters a transistor's resistance and hence the current flowing in one part of the circuit. The current, and therefore the output voltage, is thus directly determined by the amount of light that falls on the photodiode.
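The net effect of the circuit, that more light means a larger drop from the reset voltage, can be captured in a toy linear model. The reset voltage and sensitivity constants below are made-up values, not taken from any real sensor datasheet:

```python
# Made-up constants: a hypothetical 3.3 V reset level and a fictitious
# sensitivity of 2 mV of voltage drop per photon.
RESET_VOLTAGE = 3.3
SENSITIVITY = 0.002

def photosite_output(photons):
    """Output voltage after exposure: more light -> larger voltage drop."""
    drop = min(SENSITIVITY * photons, RESET_VOLTAGE)  # cannot drop below 0 V
    return RESET_VOLTAGE - drop

print(photosite_output(0))     # a dark photosite stays at the reset level
print(photosite_output(1000))  # a bright one drops to about 1.3 V
```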
After the voltage value and the row and column positions of the photosite (the required information) are read, the initial voltage is reset. The output voltage is amplified and sent to an analog-to-digital converter (ADC) that converts everything to binary form. Further processing, such as filling in the other color values for each pixel, takes place after that. We can now see why more photosites on a sensor lead to a more detailed image: the frame is divided into a larger number of pixels, each with its own color value.
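The analog-to-digital conversion step can be sketched as simple quantization: mapping a voltage in a reference range to one of 2^n integer codes. The 3.3 V reference and 10-bit depth below are assumptions for illustration:

```python
def adc(voltage, v_ref=3.3, bits=10):
    """Quantize a voltage in [0, v_ref] to an integer code in [0, 2**bits - 1]."""
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(code, levels - 1))  # clamp out-of-range inputs

print(adc(0.0))   # 0    (darkest)
print(adc(3.3))   # 1023 (brightest, full scale)
print(adc(1.65))  # mid-scale, around 511
```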
The commercialization of CMOS sensor technology in 1995 quickly became a huge success. It found applications in areas like webcams, low-cost PC cameras, multimedia, surveillance, and videophones, and by 2007 CMOS sales had surpassed those of CCD sensors. This massive shift came primarily because CMOS sensors offer low power consumption and share the same basic structure as microprocessors, so they can be mass-produced using the same well-established manufacturing technology, making their production much cheaper than that of CCDs. CCDs are still best suited for high-end industrial and other specialized applications, as they offer better light sensitivity per pixel, among other advantages over CMOS.
Companies are investing more and more in the advancement of CMOS technology, and every year smartphone cameras achieve substantial improvements. Among all the aspects that help improve image quality and camera performance, improving the image sensor always remains the priority, as it plays a crucial role in the quality of the image.
As the demand for higher detail and resolution increases, more pixels are accommodated on sensors of the same size by decreasing the pixel size, or pitch. The tradeoff is less light captured per pixel and more digital noise in low-light conditions. Moreover, once the resolution goes beyond 40 or 50 MP, the extra detail may be beyond what the human eye can perceive.
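The light-per-pixel tradeoff follows from geometry: for (assumed) square pixels, the light-gathering area scales with the square of the pitch. A quick back-of-the-envelope comparison against a hypothetical 1.4-micron reference pixel:

```python
def relative_light(pitch_um, reference_um=1.4):
    """Light gathered per pixel relative to a reference pitch,
    assuming square pixels (area scales as pitch squared)."""
    return (pitch_um / reference_um) ** 2

print(relative_light(1.4))  # 1.0: the reference pixel itself
print(relative_light(0.8))  # ~0.33: roughly a third of the light
```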
Recently Samsung introduced a 108 MP image sensor, employed in the Galaxy S20 Ultra, with a pixel pitch of 0.8 microns and many other cutting-edge features.
Reducing the pixel pitch to accommodate more pixels on the same-sized sensor seems to have a natural limit: the pixel should not be smaller than the wavelength of visible light, which is about 0.7 microns for red. But this is not strictly true, since sub-wavelength pixel pitches are already a reality; achieving them requires many other complex features to be optimized. The limitation actually comes from the user end, when people stop seeing any benefit from reducing the pitch further. Several other avenues of improvement exist as well, such as reducing digital noise and increasing quantum efficiency (the conversion ratio of photons to electrons). And there is every reason to believe that new possibilities for improvement will arise, since in the digital age there is always room for improvement!
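Quantum efficiency amounts to one line of arithmetic; the 60% figure below is purely illustrative, not a real sensor specification:

```python
def electrons_collected(photons, quantum_efficiency=0.6):
    """Electrons generated from incident photons at a given QE
    (0.6, i.e. 60%, is an illustrative figure, not a real spec)."""
    return photons * quantum_efficiency

print(electrons_collected(1000))  # 600.0 electrons from 1000 photons
```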
Name: Deepanshu Bisht
2nd Physics
References:
edmundoptics.com
semiengineering.com
tel.com
Global.canon
https://www.bhphotovideo.com/find/newsLetter/Comparing-Image-Sensors.jsp
http://www.linkwitzlab.com/dpp/A-D-conversion.htm
http://large.stanford.edu/courses/2012/ph250/lu2/