7 Answers

  1. Interesting question. First, let's recall what a megapixel is: that part is simple – it's one million pixels. And what is a pixel in a color camera sensor? Also not difficult: roughly speaking, it's a point in the mosaic that perceives the color of the light that hits it. But how exactly does a pixel perceive color? Several fundamentally different approaches are used when designing sensors; we will consider the simplest of them – the one closest in principle to our own vision.

    So, with this approach, every pixel – note – consists of three parts, subpixels, each of which can perceive only its own color: red, green, or blue. The pixel as a whole is a mixture of these colors in a certain proportion (recall that any color we perceive can be obtained by mixing these three primaries). Thus, if a camera's resolution is one megapixel, its sensor consists of three million alternating subpixels.
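    The arithmetic in the last sentence is trivial to check; a minimal sketch (the function name is just for illustration):

    ```python
    # With an RGB-subpixel design, each pixel is built
    # from three single-color subpixels.
    SUBPIXELS_PER_PIXEL = 3  # red, green, blue

    def subpixel_count(megapixels: float) -> int:
        """Total number of single-color subpixels for a given resolution."""
        return int(megapixels * 1_000_000 * SUBPIXELS_PER_PIXEL)

    print(subpixel_count(1))  # 1 MP -> 3000000 subpixels
    ```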

    What about our eye? Alas, its matrix (the retina) does not contain a single full-fledged pixel. But it is full of subpixels, differing in sensitivity and in principle of operation, scattered in complete disarray, and unevenly at that. The role of these subpixels is played by photoreceptor cells – the rods and cones familiar from school biology lessons. At a sufficient level of illumination, rods perceive the blue-green part of the spectrum, while cones – there are as many as three types of them – perceive the violet-blue, green-yellow, and yellow-red parts. As for their number (so we've finally reached the figures), the retina of one eye contains about 120 million rods and about 6 million cones.

    So I would say that the resolution of each eye is about 126 mega-"not-quite-pixels". Though I'm afraid both biologists and engineers would challenge me to a duel over that phrase. But you can see for yourself that the answer can hardly be put any other way: there simply is no proper digital sensor in our eye, so what can you do?
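    The figure above is just the sum of the two receptor counts, treating every photoreceptor cell as one "subpixel" (a deliberately rough assumption, as the author admits):

    ```python
    # Receptor counts quoted in the answer above.
    RODS_PER_EYE = 120_000_000
    CONES_PER_EYE = 6_000_000

    total = RODS_PER_EYE + CONES_PER_EYE
    print(f"{total / 1_000_000:.0f} mega-'not-quite-pixels'")  # 126
    ```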

  2. I thought about it and decided that this can be settled by experiment. True, we won't be able to determine the boundaries of sharp vision precisely, but the DPI is quite high. I think we should draw a small chessboard, where each cell is a pixel, and move it away from the viewer. The moment the black-and-white checkerboard blurs into a uniform gray field gives us the number of points resolvable at a given distance. Next, we need to decide what area a person can view at once without moving the eyes. It's like choosing which row to sit in at the cinema: the first one is not very comfortable, and from the back row the screen may look too small.

    As for me, Full HD is quite enough for a movie. 4K may even be overkill.

  3. 130 MP. Moreover, that figure applies only to the area where the person is looking; in reality the effective resolution is (conditionally) even higher, since the brain builds its picture from several "frames" received from different points in space.

  4. The problem is that the number of pixels doesn't mean much by itself. A camera sensor, besides its pixel count, also has a linear size.

    The quality of a sensor is judged by the size of the sharp print (or on-screen image) that can be obtained from it. At the same time, one should account for the distance from which the print will be viewed: the same image may look blurry up close, while a huge poster on the wall of a building may look sharp from afar.

    In the case of our eyes there is no print at all – only a certain representation of the object that arises in the brain. Based on that representation, we could produce a print of any size and quality, if only we knew how to draw )

  5. Roughly speaking, we have 2 megapixels (6 million cones / 3 types) of color vision, working only in bright light and concentrated near the center of the field of view, and about 120 megapixels of detecting (non-color) vision with a much more uniform distribution. We use the first when we need to look at something in detail: a book, a berry, a mammoth; as we do, our eyes dart around, scanning the image. We use the second to notice something on the periphery OR at night, when there isn't enough light for color vision. The brain then remembers what color an object was in normal light and automatically tells us that, in general, the trees should be green, the house yellow, and the car… er? In the dark, all cats are gray =)

    The human eye's resolution is determined by the diameter of the pupil and the turbidity of the lens and vitreous body. Normally we see two glowing points as separate if the distance between them is about 3 arc minutes. For example, people without visual defects easily see the Alcor–Mizar pair as double even in Moscow (the second star from the end of the Big Dipper's handle in Ursa Major), but few can see the duality of Epsilon Lyrae, whose components are separated by just a few arc minutes. (Vega is one of the brightest stars of the summer–autumn sky and one of the three stars of the Summer Triangle, the brightest stars in the southern sky; of the three, Vega is the one closest to the zenith in the mid-latitudes, toward the west. Below it lies a parallelogram – the constellation Lyra proper. The parallelogram star nearest to Vega, Vega itself, and Epsilon Lyrae form a roughly equilateral triangle.)

    Despite the fact that we are used to high-quality images of the Moon, with the naked eye we effectively see it at roughly 10 × 10 pixels, noticing only the largest details.
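    These figures are easy to sanity-check. The 3-arcminute acuity value comes from the answer above; the angular separations and the Moon's apparent diameter are well-known approximate values:

    ```python
    # Acuity figure quoted in the answer above.
    ACUITY_ARCMIN = 3.0

    MOON_DIAMETER_ARCMIN = 31.0   # apparent diameter of the Moon (approx.)
    ALCOR_MIZAR_ARCMIN = 11.8     # separation of Alcor and Mizar (approx.)
    EPSILON_LYRAE_ARCMIN = 3.5    # separation of the two Epsilon Lyrae pairs

    print(round(MOON_DIAMETER_ARCMIN / ACUITY_ARCMIN))  # ~10 "pixels" across
    print(ALCOR_MIZAR_ARCMIN > ACUITY_ARCMIN)           # True: an easy double
    print(EPSILON_LYRAE_ARCMIN > ACUITY_ARCMIN)         # barely above the limit
    ```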

  6. There is no way to calculate this, and attempts to correlate pixels with the number of cones are completely meaningless, for a few simple reasons.

    Pixel counts mean nothing unless you specify dpi (dots per inch) or some other similar ratio.

    It should be borne in mind that our vision is a complex mechanism, much more complex than a camera sensor, but in some respects also much worse, for objective reasons.

    In short, if we took a picture through our eye apparatus, we would get a giant but largely useless image: the format would be utterly bizarre, and parts of the image would be blurred or colorless due to the peculiarities of our vision. Even the individual pixels would differ in their proportions. You couldn't put a photo like that in a frame.

    Given all the problems of human vision, it is hard even to say how many megapixels would have to be cropped from such a photo to get a high-quality picture. In other words, even if we try to express our vision in pixels, it does not have a fixed number of "active" pixels: that number differs from person to person due to a whole host of factors.

  7. The question is complex and interesting, and when answering it, you need to make a number of assumptions.

    1. A person has stereoscopic vision (i.e., two eyes whose images are combined into a single whole. Thanks, brain.)

    2. Our vision is more like a video stream than a still photograph.

    3. The human eye is not fixed: it rotates, and its field of view differs from a camera's.

    Roger N. Clark took the properties of human vision into account and concluded that the human field of view covers about 576 megapixels (assuming a 120-degree field, which is quite conservative).
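    Clark's headline figure can be reproduced with simple arithmetic, assuming a square 120° × 120° field of view resolved at about 0.3 arc minutes per "pixel" (the acuity figure used in his estimate):

    ```python
    FIELD_DEG = 120          # assumed square field of view, in degrees
    ARCMIN_PER_PIXEL = 0.3   # angular size of one resolvable "pixel"

    # Degrees -> arc minutes -> pixels along one side of the field.
    pixels_per_side = FIELD_DEG * 60 / ARCMIN_PER_PIXEL
    megapixels = round(pixels_per_side ** 2 / 1_000_000)
    print(megapixels)  # 576
    ```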

    If you focus only on the part covered by a single glance at objects in focus, then, as described in a video on the Vsauce channel, the image size is significantly smaller – about 7 megapixels.
