tomhow 7 hours ago

See also:

A Pixel Is Not a Little Square (1995) [pdf] – http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

  • glitchc 6 hours ago

    This is written from a rather narrow perspective (of signal processing) and is clearly wrong in other contexts. For an image sensor designer, gate length, photosensitive area and pixel pitch are all real-valued measurements. That pixels are laid out on a grid simply reflects the ease of constructing electronic circuits this way. Alternative arrangements are possible, Sigma's depth pixels being one commercial example.

    • grandempire 5 hours ago

      Ok, but that's not what a digital image is. Images are designed to be invariant across camera capture and display hardware. The panel driver should interpret the DSP representation into an appropriate electronic pixel output.

      • glitchc 5 hours ago

        Yeah but the article is about a pixel, which has different meanings. Making blanket statements is not helpful in resolving definitions.

        Truth is, a pixel is both a sample and a transducer. And in transduction, a pixel is both an integrator and an emitter.

        • grandempire 4 hours ago

          I’ll quote my other comment:

          > If you are looking to understand how your operating system will display images, or how your graphics drivers work, or how photoshop will edit them, or what digital cameras aim to produce, then it’s the point sample definition.

          • tobr 4 hours ago

            Sometimes yes. Sometimes no! There are certainly situations where a pixel will be scaled, displayed, edited or otherwise treated as a little square.

            • seba_dos1 2 hours ago

              A little rectangle even.

              • grandempire 36 minutes ago

                What is a little rectangle? Physical display pixels are separate RGB LEDs with a non-rectangular shape.

                • Dylan16807 4 minutes ago

                  They're rectangles on my monitor.

      • fluoridation 5 hours ago

        Well, "what a digital image is" is a sequence of numbers. There's no single correct way to interpret the numbers, it depends on what you want to accomplish. If your digital image is a representation of, say, the dead components in an array of sensors, the signal processing theoretic interpretation of samples may not be useful as far as figuring out which sensors you should replace.

        • grandempire 5 hours ago

          > There's no single correct way to interpret the numbers

          They are just bits in a computer. But there is a correct way to interpret them in a particular context. For example, 32 bits can be meaningless - or they can have an interpretation as a two's complement integer, which is well defined.
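
          A toy illustration of that point (my own sketch, nothing pixel-specific): the same 32 bits decoded under three different interpretations.

              # Hypothetical example: the same four bytes read three ways.
              import struct

              raw = b"\xff\xff\xff\xff"
              print(struct.unpack("<i", raw)[0])  # two's complement int: -1
              print(struct.unpack("<I", raw)[0])  # unsigned int: 4294967295
              print(struct.unpack("<f", raw)[0])  # IEEE 754 float: nan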

          If you are looking to understand how an operating system will display images, or how graphics drivers work, or how photoshop will edit them, or what digital cameras produce, then it’s the point sample definition.

      • kjkjadksj 2 hours ago

        Well the camera sensor captures a greater dynamic range than the display or print media or perhaps even your eyes, so something has to give. If you ever worked with a linear file without gamma correction you will understand what I mean.

  • calibas 5 hours ago

    If you want a good example of what happens when you treat pixels like they're just little squares, disable font smoothing. Anti-aliasing, fonts that look good, and smooth animation are all dependent upon subpixel rendering.

    https://en.wikipedia.org/wiki/Subpixel_rendering

    Edit: For the record, I'm on Win 10 with a 1440p monitor and disabling font smoothing makes a very noticeable difference.

    People are acting like this is some issue that no longer exists, and you don't have to be concerned with subpixel rendering anymore. That's not true, and highlights a bias that's very prevalent here on HN. Just because I have a fancy retina display doesn't mean the average user does. If you pretend like subpixel rendering is no longer a concern, you can run into situations where fonts look great on your end, but an ugly jagged mess for your average user.

    And you can tell who the Apple users are because they believe all this went away years ago.

    • kccqzy 5 hours ago

      This might have been a good example fifteen years ago. These days with high-DPI displays you can't perceive a difference between font smoothing being turned on and off. On macOS for example font smoothing adds some faux bold to the fonts, and it's long been recommended to turn it off. See for example the influential article https://tonsky.me/blog/monitors/ which explains that font smoothing used to do subpixel antialiasing, but the whole feature was removed in 2018. It also explains that this checkbox doesn't even control regular grayscale antialiasing, and I'm guessing it's because downscaling a rendered @2x framebuffer down to the physical resolution inherently introduces antialiasing.

      • calibas 4 hours ago

        Maybe true for Mac users, but the average Win 10 desktop is still using a 1080p monitor.

        • hulium 2 hours ago

          The users may have 1080p monitors, but even Windows does not do subpixel antialiasing in its new apps (UWP/WinUI) anymore. On Linux, GTK4 does not do subpixel antialiasing anymore.

          The reason is mostly that it is too hard to make it work under transformations and compositing, while higher resolution screens are a better solution for anyone who cares enough.

          • calibas an hour ago

            > even Windows does not do subpixel antialiasing in its new apps (UWP/WinUI) anymore

            This is a little misleading, as the new versions of Edge and Windows Terminal do use subpixel antialiasing.

            What Microsoft did was remove the feature on a system level, and leave implementation up to individual apps.

          • verall 15 minutes ago

            Is this why font rendering on my Win11 laptop looks like shit on my 1440p external monitor?

            Laptop screen is 4k with 200% scaling.

            Seriously the font rendering in certain areas (i.e. windows notification panel) is actually dogshit. If I turn off the 200% scaling on the laptop screen then reboot it looks correct again.

    • dahart 3 hours ago

      You're conflating different topics. LCD subpixel rendering and font smoothing are often implemented by treating the subpixels as little rectangles, which is the same mistake as treating pixels as squares.

      Anti-aliasing can be and is done on squares routinely. It’s called ‘greyscale antialiasing’ to differentiate from LCD subpixel antialiasing, but the name is confusing since it works and is most often used on colors.

      The problem Alvy-Ray is talking about is far more subtle. You can do anti-aliasing with little squares, but the result isn’t 100% correct and is not the best result possible no matter how many samples you take. What he’s really referring to is what signal processing people call a box filter, versus something better like a sinc or Gaussian or Mitchell filter.
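
      Here's a rough 1D sketch of that difference (toy code of my own, not from the paper): downsampling by block-averaging (the box filter) versus low-passing with a Hann-windowed sinc before decimating. The box average lets noticeably more of the high-frequency tone leak through and alias.

          import numpy as np

          def box_downsample(x, factor):
              # The "little square" model: average each block of samples.
              return x[: len(x) // factor * factor].reshape(-1, factor).mean(axis=1)

          def sinc_downsample(x, factor, taps=65):
              # Hann-windowed sinc low-pass at the new Nyquist rate, then decimate.
              n = np.arange(taps) - (taps - 1) / 2
              h = np.sinc(n / factor) * np.hanning(taps)
              h /= h.sum()
              return np.convolve(x, h, mode="same")[::factor]

          t = np.linspace(0, 1, 4000, endpoint=False)      # 4000 Hz sample rate
          x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)
          print(box_downsample(x, 8)[:4])    # 900 Hz partly survives and aliases
          print(sinc_downsample(x, 8)[:4])   # 900 Hz is strongly suppressed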

      Regarding your edit, on a high DPI display there’s very little practical difference between LCD subpixel antialiasing and ‘greyscale’ (color) antialiasing. You don’t need LCD subpixels to get effective antialiasing, and you can get mostly effective antialiasing with square shaped pixels.

    • wat10000 5 hours ago

      I don't think that's true anymore. Modern high-resolution displays have pixels small enough that they don't really benefit from sub-pixel rendering, and logical pixels have become decoupled from physical pixels to the point of making sub-pixel rendering a lot more difficult.

      • 7jjjjjjj 4 hours ago

        It's still true for everyone who doesn't have a high DPI monitor. We're talking about a feature that doubles the price of the display for little practical value. It's not universal, and won't be for a long time.

  • nayuki 3 hours ago

    Agreed. The fact that a pixel is an infinitely small point sample - and not a square with area - is something that Monty explained in his demo too: https://youtu.be/cIQ9IXSUzuM?t=484

    • dahart 2 hours ago

      Eh, calling it infinitely small is at least as misleading as calling it a square. While they are both mostly correct, neither Monty’s explanation nor Alvy-Ray’s are all that good. Pixels are samples taken at a specific point, but pixel values do represent area one way or another. Often they are not squares, but on the other hand LCD pixels are pretty square-ish. Camera pixels are integrals over the sensor area, which captures an integral over a solid angle. Pixels don’t have a standard shape, it depends on what capture or display device we’re talking about, but no physical capture or display devices have infinitely small elements.

      • wtallis 2 hours ago

        Camera pixels represent an area, but pixels coming out of a 3D game engine usually represent a point sample. Hand-drawn 2d pixel art is explicitly treating pixels as squares. All of these are valid uses that must coexist on the same computer.

      • kjkjadksj 2 hours ago

        Some early variants of the Sony A7 had fewer but larger pixels to improve light gathering at high ISO.

    • jansan 2 hours ago

      Signal processing engineers against the rest of the world.

Laremere 11 hours ago

I'd say it's better to call it a unit of counting.

If I have a bin of apples, and I say it's 5 apples wide, and 4 apples tall, then you'd say I have 20 apples, not 20 apples squared.

It's common to specify a length by a count of items passed along that length. Eg, a city block is a ~square on the ground bounded by roads. Yet if you're traveling in a city, you might say "I walked 5 blocks." This is a linguistic shortcut, skipping implied information. If you're trying to talk about both in an unclear context, additional words to clarify are required to sufficiently convey the information; that's just how language works.

  • gnfargbl 10 hours ago

    Exactly. Pixels are indivisible quanta, not units of any kind of distance. Saying pixel^2 makes as much sense as counting the number of atoms on the surface of a metal and calling it atoms^2.

    • chii 4 hours ago

      So how do subpixels come into play under this idea of quanta?

      • starkparker 3 hours ago

        Pixels then become containers and subpixels become quantifiable entities within each pixel. In the apple analogy, each crate contains three countable apples and you can count both the crates and the apples independently.

        This idea itself breaks down when we get to triangular subpixel rendering, which spans pixels and divides subpixels. But it's also a minor form of optical illusion, so making sense of it is inherently fraught.

        Maybe a pixel is just a pixel.

      • masklinn 3 hours ago

        Quarks? They’re sub-units of hadrons but iirc they can’t be found on their own.

      • n0n0n4t0r 4 hours ago

        I think we'll need to use some maths from lattice QCD!

  • _ph_ 3 hours ago

    That is exactly how it is and it makes the whole article completely pointless. Especially as the article in the second sentence correctly writes "1920 pixels wide".

  • petesergeant 10 hours ago

    Is it that, or is it a compound unit that has a defined width and height already? Something can be five football fields long by two football fields wide, for an area of ten football fields.

    • timerol 6 hours ago

      This example illustrates potential confusion around non-square pixels. 5 football fields long makes perfect sense, but I'm not sure if 2 football fields wide means "twice the width of a football field" or "width equaling twice the length of a football field". I would lean towards the latter in colloquial usage, which means that the area is definitely not the same as the area of 10 football fields.

      • pests 2 hours ago

        I would lean towards the former. I really don't think people are trying to compare the width to the length when discussing football fields casually.

        If I told you parking spots are about two bowling lanes wide... I'm obviously not trying to say they are 120ft wide.

    • jeltz 10 hours ago

      No, it is a count. Pixels can have different sizes and shapes, just like apples. Technically football fields vary slightly too but not close to as much as apples or pixels.

      • vikingerik 6 hours ago

        Football fields also have the fun property of varying in the third dimension. They're built with a crown in the middle so that water will drain off towards the edges, and that can vary significantly between instances.

        And pixels are even starting to vary in the third dimension too, with the various curved and bendable and foldable displays.

        • robotresearcher 4 hours ago

          Pixels used to be realized non-flat in the CRT days.

      • HelloNurse 6 hours ago

        Pixel counts generally represent areas by taking the number of pixels inside a region of the plane, but they can represent lengths by taking the number of pixels inside a certain extent of a single line or column of the grid: it is, actually, a thin rectangle.

      • petesergeant 10 hours ago

        What's the standard size of a city block, the other countable example given by the original author?

        • jeltz 9 hours ago

          Yes, city blocks are like pixels or apples. They do not have a standard size or shape.

          Edit: To clarify, if someone says 3 blocks, that could vary by a factor of like 3, or in extreme cases more, so when used as a unit of length it is a very rough estimate. It is usually used in my country as a way to know when you have reached your destination.

  • solardev 5 hours ago

    > If I have a bin of apples, and I say it's 5 apples wide, and 4 apples tall

    ...then you have a terrible bin for apple storage and should consider investing in a basket ;)

    • pphysch 4 hours ago

      If you don't care about bruising

RedNifre 5 hours ago

What a perplexing article.

Isn't a pixel clearly specified as a picture element? Isn't the usage as a length unit just as colloquial as "It's five cars long", which is just a simplified way of saying "It is as long as the length of a car times five", where "car" and "length of car" are very clearly completely separate things?

> The other awkward approach is to insist that the pixel is a unit of length

Please don't. If you want a unit of length that works well with pixels, you can use Android's "dp" concept instead, which are "density independent pixels" (kinda a bad name if you think about it) and are indeed a unit of length, namely 1dp = 158.75 micrometres, so that you have 160 dp to the inch. Then you can say "It's 10dp by 5dp, so 50 square dp in area.".
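
A quick sanity check of that arithmetic (assuming exactly 160 dp per inch):

    DP_PER_INCH = 160
    MICROMETRES_PER_INCH = 25_400
    print(MICROMETRES_PER_INCH / DP_PER_INCH)  # 158.75 micrometres per dp

    width_dp, height_dp = 10, 5
    print(width_dp * height_dp)                # 50 square dp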

  • munk-a 38 minutes ago

    Another colloquial saying to back this up is that "Oh, that house is five acres down the road" or, for a non-standard unit, "The store is three blocks away". We often use area measurements for length if it's convenient.

    The pixel is a unit of area - we just occasionally use units of area to measure length.

  • tshaddox 3 hours ago

    Yeah, this isn't really that complicated. It's just colloquial usage, not rigorous dimensional analysis. Roughly no one is actually confused by either usage ("1920 by 1080" or "12 megapixels").

    It's nearly identical to North American usage of "block" (as in "city block"). Merriam Webster lists these two definitions (among many others):

    > 6 a (1): a usually rectangular space (as in a city) enclosed by streets and occupied by or intended for buildings

    > 6 a (2): the distance along one of the sides of such a block

jefftk 2 hours ago

This isn't just pixels, it's the normal way we use rectangular units in common speech:

* A small city might be ten blocks by eight blocks, and we could also say the whole city is eighty blocks.

* A room might be 13 tiles by 15 tiles, or 195 tiles total.

* On graph paper you can draw a rectangle that's three squares by five squares, or 15 squares total.

otikik 5 hours ago

Well, the way I see them, I don't think they are a unit at all.

And in the end, pixels are "physical things". Like ceramic tiles on a bathroom wall.

Your wall might be however many meters in length and you might need however many square meters of tile in order to cover it. But still, if you need 10 tiles high and 20 tiles wide, you need 200 tiles to cover it. No tension there.

Now you might argue that pixels in a scaled game don't correspond with physical objects in the screen any more. That's ok. A picture of the bathroom wall will look smaller than the wall itself. Or bigger, if you hold it next to your face. It's still 10x20=200 tiles.

justin_ 10 hours ago

> A Pixel Is Not A Little Square!

> This is an issue that strikes right at the root of correct image (sprite) computing and the ability to correctly integrate (converge) the discrete and the continuous. The little square model is simply incorrect. It harms. It gets in the way. If you find yourself thinking that a pixel is a little square, please read this paper.

> A pixel is a point sample. It exists only at a point. For a color picture, a pixel might actually contain three samples, one for each primary color contributing to the picture at the sampling point. We can still think of this as a point sample of a color. But we cannot think of a pixel as a square—or anything other than a point.

Alvy Ray Smith, 1995 http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf

  • Veedrac 9 hours ago

    A pixel is simply not a point sample. A camera does not take point sample snapshots, it integrates lightfall over little rectangular areas. A modern display does not reconstruct an image the way a DAC reconstructs sounds, they render little rectangles of light, generally with visible XY edges.

    The paper's claim applies at least somewhat sensibly to CRTs, but one mustn't imagine the voltage interpolation and shadow masking a CRT does correspond meaningfully to how modern displays work... and even for CRTs it was never actually correct to claim that pixels were point samples.

    It is pretty reasonable in the modern day to say that an idealized pixel is a little square. A lot of graphics operates under this simplifying assumption, and it works better than most things in practice.

    • justin_ 9 hours ago

      > A camera does not take point sample snapshots, it integrates lightfall over little rectangular areas.

      Integrates this information into what? :)

      > A modern display does not reconstruct an image the way a DAC reconstructs sounds

      Sure, but some software may apply resampling over the original signal for the purposes of upscaling, for example. "Pixels as samples" makes more sense in that context.

      > It is pretty reasonable in the modern day to say that an idealized pixel is a little square.

      I do agree with this actually. A "pixel" in popular terminology is a rectangular subdivision of an image, leading us right back to TFA. The term "pixel art" makes sense with this definition.

      Perhaps we need better names for these things. Is the "pixel" the name for the sample, or is it the name of the square-ish thing that you reconstruct from image data when you're ready to send to a display?

      • echoangle 9 hours ago

        > Integrates this information into what? :)

        Into electric charge? I don’t understand the question, and it sounds like the question is supposed to lead readers somewhere.

        The camera integrates incoming light over a tiny square into an electric charge and then reads out the charge (at least for a CCD), giving a brightness (and with the Bayer filter in front of the sensor, a color) for the pixel. So it’s a measurement over the tiny square, not a point sample.
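
        A toy sketch of that difference (made-up numbers, nothing sensor-specific): integrate a sharply focused spot over a unit-square pixel versus read the irradiance at the pixel centre only.

            import numpy as np

            def irradiance(x, y):
                # Focused spot near the corner of the unit-square pixel.
                return np.exp(-((x - 0.9) ** 2 + (y - 0.9) ** 2) / 0.001)

            xs = ys = np.linspace(0.0, 1.0, 400)
            X, Y = np.meshgrid(xs, ys)

            area_value = irradiance(X, Y).mean()   # roughly what the sensor pixel reports
            point_value = irradiance(0.5, 0.5)     # what a true point sample would report

            print(area_value)    # small but clearly non-zero
            print(point_value)   # ~0: the point sample misses the spot entirely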

        • justin_ 8 hours ago

          > The camera integrates incoming light into a tiny square [...] giving a brightness (and with the Bayer filter in front of the sensor, a color) for the pixel

          This is where I was trying to go. The pixel, the result at the end of all that, is the single value (which may be a color with multiple components, sure). The physical reality of the sensor having an area and generating a charge is not relevant to the signal processing that happens after that. For Smith, he's saying that this sample is best understood as a point, rather than a rectangle. This makes more sense for Smith, who was working in image processing within software, unrelated to displays and sensors.

          • echoangle 8 hours ago

            It’s a single value, but it’s an integral over the square, not a point sample. If I shine a perfectly focused laser very close to the corner of one sensor pixel, I’ll still get a brightness value for the pixel. If it were a point sample, only the brightness at a single point would give an output.

            And depending on your application, you absolutely need to account for sensor properties like pixel pitch and color filter array. It affects moire pattern behavior and creates some artifacts.

            I’m not saying you can’t think of a pixel as a point sample, but correcting other people who say it’s a little square is just wrong.

            • grandempire 5 hours ago

              > It affects moire pattern behavior and creates some artifacts.

              Yes. The spacing between point samples determines the frequency, a fundamental characteristic of a dsp signal.

          • glitchc 6 hours ago

            It's never a point source. Light is integrated over a finite area to form a single color sample. During Bayer mosaicking, contributions from neighbouring pixels are integrated to form samples of complementary color channels.

            • justin_ 6 hours ago

              > Light is integrated over a finite area to form a single color sample. During Bayer mosaicking, contributions from neighbouring pixels are integrated to form samples of complementary color channels.

              Integrated into a single color sample indeed. After all the integration, mosaicking, and filtering, a single sample is calculated. That’s the pixel. I think that’s where the confusion is coming from. To Smith, the “pixel” is the sample that lives in the computer.

              The actual realization of the image sensors and their filters is not encoded in a typical image format, nor used in typical high-level image processing pipelines. For abstract representations of images, the “pixel” abstraction is used.

              The initial reply to this chain focused on how camera sensors capture information about light, and yes, those sensors take up space and operate over time. But the pixel, the piece of data in the computer, is just a point among many.

              • SAI_Peregrinus 2 hours ago

                IMO Smith misapplied the term "pixel" to mean "sample". A pixel is a physical object, a sample is a logical value that corresponds in some way to the state of the physical object (either as an input from a sensor pixel or an output to a display pixel) but is also used in signal processing. Samples aren't squares, pixels (usually) are.

              • glitchc 5 hours ago

                > Integrated into a single color sample indeed. After all the integration, mosaicking, and filtering, a single sample is calculated. That’s the pixel. I think that’s where the confusion is coming from. To Smith, the “pixel” is the sample that lives in the computer.

                > But the pixel, the piece of data in the computer, is just a point among many.

                Sure, but saying that this is the pixel, and negating all other forms as not "true" pixels is arbitrary. The real-valued physical pixels (including printer dots) are equally valid forms of pixels. If anything, it would be impossible for humans to sense the pixels without interacting with the real-valued forms.

    • jeremyscanvic 8 hours ago

      A slightly tangential comment: integrating a continuous image on squares paving the image plane might be best viewed as applying a box filter to the continuous image, resulting in another continuous image, then sampling it point-wise at the center of each square.

      It turns out that when you view things that way, pixels as points continues to make sense.
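
      A quick discrete sanity check of that view (my own sketch): block-averaging a signal gives exactly the numbers you get by box-filtering it and then point-sampling at each block centre.

          import numpy as np

          x = np.random.rand(12)
          k = 3  # "pixel" width in samples

          blocks = x.reshape(-1, k).mean(axis=1)         # integrate over each square

          box = np.ones(k) / k
          filtered = np.convolve(x, box, mode="same")    # box-filter the whole signal
          centres = filtered[k // 2 :: k]                # then sample at block centres

          print(np.allclose(blocks, centres))            # True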

    • toxik 9 hours ago

      The representation of pixels on the screen is not necessarily normative for the definition of the pixel. Indeed, since different display devices use different representations as you point out, it can't really be. You have to look at the source of the information. Is it a hit mask for a game? Then they are squares. Is it a heatmap of some analytical function? Then they are points. And so on.

    • gugagore 9 hours ago

      DACs do a zero-order hold, which is equivalent to a pixel as a square.

  • toxik 10 hours ago

    This is one of my favorite articles. Although I think you can define for yourself what your pixels are, for most it is a point sample.

blenderob 9 hours ago

The article starts out with an assertion right in the title and does not do enough to justify it. The title is just wrong. Saying pixels are like metres is like saying metres are like apples.

When you multiply 3 meter by 4 meter, you do not get 12 meters. You get 12 meter squared. Because "meter" is not a discrete object. It's a measurement.

When you have points A, B, C. And you create 3 new "copies" of those points (by geometric manipulation like translating or rotating vectors to those points), you now have 12 points: A, B, C, A1, B1, C1, A2, B2, C2, A3, B3, C3. You don't get "12 points squared". (What would that even mean?) Because points are discrete objects.

When you have 3 apples in a row and you add 3 more such rows, you get 4 rows of 3 apples each. You now have 12 apples. You don't have "12 apples squared". Because apples are discrete objects.

When you have 3 pixels in a row and you add 3 more such rows of pixels, you get 4 rows of 3 pixels each. You now have 12 pixels. You don't get "12 pixels squared". Because pixels are discrete objects.

Pixels are like points and apples. Pixels are not like metres.

  • Izkata 6 hours ago

    > When you multiply 3 meter by 4 meter, you do not get 12 meters. You get 12 meter squared.

    "12 meter(s) squared" sounds like a square that is 12 meters on each side. On the other hand, "12 square meters" avoids this weirdness by sounding like 12 squares that are one meter on each side, which the area you're actually describing.

    • chii 4 hours ago

      that's just a quirk of the language.

      If you use formal notation, 12 m^2 is very clear. But I have yet to see anyone write 12px^2

      • Izkata 2 hours ago

        It's one that really bothers me because of the unnecessary confusion it adds.

        As for the rest, see GGP's argument. px^2 doesn't make logical sense. When people use pixels as length, it's in the same way as "I live 2 houses over" - taking a 2D or 3D object and using one of its dimensions as length/distance.

ChrisMarshallNY 10 hours ago

A pixel is a dot. The size and shape of the dot is implementation-dependent.

The dot may be physically small, or physically large, and it may even be non-square (I used to work for a camera company that had non-square pixels in one of its earlier DSLRs, and Bayer-format sensors can be thought of as “non-square”), so saying a pixel is a certain size, as a general measure across implementations, doesn’t really make sense.

In iOS and MacOS, we use “display units,” which can be pixels, or groups of pixels. The ratio usually changes, from device to device.

webstrand 2 hours ago

This kind of thing is common in English, though. "an aircraft carrier is 3.5 football fields long"

The critical distinction is the inclusion of a length dimension in the measurement: "1920 pixels wide", "3 mount everests tall", "3.5 football fields long", etc.

ttoinou 10 hours ago

    But it does highlight that the common terminology is imperfect and breaks the regularity that scientists come to expect when working with physical units in calculations

Scientists and engineers don't actually expect much; they make a lot of mistakes, are not very rigorous, and are not demanding towards each other. It is common for units to be wrong, context-defined, socially dependent, and even sometimes added together when the operator + hasn't been properly defined.

jmull 4 hours ago

Hopefully most people get that the exact meaning of "pixel" depends on context?

It certainly doesn't make sense to mix different meanings in a mathematical sense.

E.g., when referring to a width in pixels, the unit is pixel widths. We shorten it and just say pixels because it's awkward and redundant to say something like "the screen has a width of 1280 pixel widths", and the meaning is clear to the great majority of readers.

gilgoomesh 7 hours ago

A pixel is two dimensional, by definition. It is a unit of area. Even in the signal processing "sampling" definition of a pixel, it still has an areal density and is therefore still two-dimensional.

The problem in this article is it incorrectly assumes a pixel to be a length and then makes nonsensical statements. The correct way to interpret "1920 pixels wide" is "the same width as 1920 pixels arranged in a 1920 by 1 row".

In the same way that "square feet" means "feet^2" as "square" acts as a square operator on "feet", in "pixels wide" the word "wide" acts as a square root operator on the area and means "pixels^(-2)" (which doesn't otherwise have a name).

  • danbruc 6 hours ago

    It is neither a unit of length nor area, it is just a count, a pixel - ignoring the CSS pixel - has no inherent length or area. To get from the number of pixels to a length or area, you need the pixel density. 1920 pixel divided by 300 pixel per inch gives you the length of 6.4 inch and it all is dimensionally consistent. The same for 18 mega pixel, with a density of 300 times 300 pixel per square inch you get an image area of 200 square inch. Here pixel per inch times pixel per inch becomes pixel per square inch, not square pixel per square inch.
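
    In code form (the 300 ppi density is just the assumed example from above):

        pixels_wide = 1920                  # pixel
        density = 300                       # pixel / inch
        print(pixels_wide / density)        # 6.4 inch

        pixel_count = 18_000_000            # pixel
        area_density = 300 * 300            # pixel / inch^2
        print(pixel_count / area_density)   # 200.0 square inch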

  • jillesvangurp 6 hours ago

    CSS got it right by making pixels a relative unit. Meters are absolute. You cannot express pixels in meters. Because they are relative units.

    If you have a high resolution screen, a CSS pixel is typically 4 actual display pixels (2x2) instead of just 1. And if you change the zoom level, the number of display pixels might actually change in fractional ways. The unit only makes sense in relation to what's around it. If you render vector graphics or fonts, pixels are used as relative units. On a high resolution screen it will actually use those extra display pixels.

    If you want to show something that's exactly 5cm on a laptop or phone screen, you need to know the dimensions of the screen and figure out how many pixels you need per cm to scale things correctly. Css has some absolute units but they only work as expected for print media typically.

  • hinkley 3 hours ago

    Same as if you were building a sidewalk and you wanted to figure out its dimensions, you’d base it off the size of the pavers. Because half pavers are a pain and there are no half pixels.

  • chii 4 hours ago

    > The correct way to interpret "1920 pixels wide" is "the same width as 1920 pixels arranged in a 1920 by 1 row".

    But to be contrarian, the digital camera world always markets how many megapixels a camera has. So in essence, there are situations where pixels are assumed to be an area, rather than a single row of X pixels wide.

    • SAI_Peregrinus an hour ago

      The digital camera world also advertises the sensor size. So a 24MP APS-C camera has smaller pixels than a 24MP Full-frame camera, for example.

  • bnegreve 6 hours ago

    > in "pixels wide" the word "wide" acts as a square root operator on the area and means "pixels^(-2)"

    Did you mean "pixels^(1/2)"? I'm not sure what kind of units pixels^(-2) would be.

    • nayuki 3 hours ago

      pixel^(-2) is "per squared pixel". Analogously, 1 pascal = 1 newton / 1 metre^2. (Pressure is force per squared length.)

  • grandempire 5 hours ago

    > A pixel is two dimensional, by definition.

    A pixel is a point sample by definition.

    • SAI_Peregrinus an hour ago

      An odd definition. A pixel is a physical object, a picture element in a display or sensor. The value of a pixel at a given time is a sample, but the sample isn't the pixel.

      • grandempire 42 minutes ago

        Definitions of technical things you aren’t familiar with tend to be odd.

        You are referring to a physical piece of a display panel. A representation of an image in software is a different thing. Hardware and software transforms the dsp signal of an image into voltages to drive the physical pixel. That process takes into account physical characteristics like dimensions.

        Oh btw physical pixels aren’t even square and each RGB channel is a separate size and shape.

Sharlin 11 hours ago

Pixel, used as a unit of horizontal or vertical resolution, typically implies the resolution of the other axis as well, at least up to common aspect ratios. We used to say 640x480 or 1280x1024 – now we might say 1080p or 2.5K but what we mean is 1920x1080 and 2560x1440, so "pixel" does appear to be a measure of area. Except of course it's not – it's a unit of a dimensionless quantity that measures the amount of something, like the mole. Still, a "quadratic count" is in some sense a quantity distinct from "linear count", just like angles and solid angles are distinct even though both are dimensionless quantities.

The issue is muddied by the fact that what people mostly care about is either the linear pixel count or pixel pitch, the distance between two neighboring pixels (or perhaps rather its reciprocal, pixels per unit length). Further confounding is that technically, resolution is a measure of angular separation, and to convert pixel pitch to resolution you need to know the viewing distance.
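
As a back-of-the-envelope sketch of that last point (pixel pitch and viewing distance are assumed example numbers):

    import math

    pitch_mm = 0.25        # distance between neighbouring pixels (assumed)
    viewing_mm = 600       # viewing distance (assumed)

    # Angle subtended by one pixel pitch, in arcminutes
    angle_rad = 2 * math.atan(pitch_mm / (2 * viewing_mm))
    print(math.degrees(angle_rad) * 60)    # ~1.4 arcminutes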

Digital camera manufacturers at some point started using megapixels (around the point that sensor resolutions rose above 1 MP), presumably because big numbers are better marketing. Then there's the fact that camera screen and electronic viewfinder resolutions are given in subpixels, presumably again for marketing reasons.

  • HPsquared 11 hours ago

    Digital photography then takes us on to subpixels, Bayer filters (https://en.wikipedia.org/wiki/Color_filter_array) and so on. You can also split out the luminance and colour parts. Most image and video compression puts more emphasis on the luminance profile, getting the colour more approximate. The subpixels on a digital camera (or a display for that matter) take advantage of this quirk of human vision.

knallfrosch 11 hours ago

Happens to all square shapes.

A chessboard is 8 tiles wide and 8 tiles long, so it consists of 64 tiles covering an area of, well, 64 tiles.

  • HPsquared 11 hours ago

    Not all pixels are square, though! Does anyone remember anamorphic DVDs? https://en.wikipedia.org/wiki/Anamorphic_widescreen

    • nayuki 3 hours ago

      Never mind anamorphic DVDs, all of them use non-square pixels. The resolution of DVD is 720×480 pixels (or squared pixels, referring back to the article); this is a 3:2 ratio of pixel quantities on the horizontal vs. vertical axes. But the overall aspect ratio of the image is displayed as either 4:3 (SDTV) or 16:9 (HDTV), neither of which matches 3:2. Hence the pixel aspect ratio is definitely not 1:1.
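
      A quick check of those ratios:

          width_px, height_px = 720, 480
          storage_ratio = width_px / height_px   # 1.5, i.e. 3:2

          for dar in (4 / 3, 16 / 9):            # displayed aspect ratios
              par = dar / storage_ratio          # pixel aspect ratio
              print(round(par, 3))               # 0.889 (8:9) and 1.185 (32:27)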

  • pvdebbe 11 hours ago

    City blocks, too.

    • Aldipower 10 hours ago

      In the US...

      • matthewowen 6 hours ago

        Do people in Spanish cities with strong grids (eg Barcelona) not also use the local language equivalent of "blocks" as a term? I would be surprised if not. It's a fundamentally convenient term in any area that has a repeated grid.

        The fact that some cities don't have repeated grids and hence don't use the term is not really a valuable corrective to the post you are replying to.

        • mzs 2 hours ago

          In Slavic languages we think in terms of intersections for distance, maybe the same for Spanish? Area is thought of either as inside a district (say, the city centre) or in meters squared.

          • SAI_Peregrinus an hour ago

            A block is just the distance from one intersection to the next. Even if those distances vary or are non-square.

            E.g. Manhattan has mostly rectangular blocks, if you go from 8th Avenue to Madison Avenue along 39th St you traveled 4 blocks (the last of which is shorter than the first 3), if you go from 36th St to 40th St along 8th Avenue you traveled 4 blocks (all of which are shorter than the blocks between the avenues).

      • jeltz 10 hours ago

        While it is certainly more common in the US we occasionally use blocks as a measurement here in Sweden too. Blocks are just smaller and less regular here.

_wire_ 3 hours ago

This article wastes readers' time by pretending to a command of the subject, in a manner that is authoritative only in its uncertainty.

Pixel is an abbreviation for 'picture element' which describes a unit of electronic image representation. To understand it, consider picture elements in the following context...

(Insert X different ways of thinking about pictures and their elements.)

If there is a need for a jargon of mathematical "dimensionality" for any of these ways of thinking, please discuss it in such context.

Next up:

A musical note is a unit of...

tuzemec 2 hours ago

Also a measurement of life. Back in the 320x200 game days, when playing something with a health bar, we used to joke that someone had one pixel of life left when near death.

ivan_gammel 9 hours ago

So, the author answers the question:

> That means the pixel is a dimensionless unit that is just another name for 1, kind of like how the radian is length divided by length so it also equals one, and the steradian is area divided by area which also equals one.

But then for some reason decides to ignore it. I don’t understand this article. Yes, pixels are dimensionless units used for counting, not measuring. Their shape and internal structure are irrelevant (even subpixel rendering doesn’t actually deal with fractions - it alters neighbors to produce the effect).

teknopaul 2 hours ago

The pixel ain't no problem.

A "megapixel" is simply defined as 1024 pixels squared ish.

There is no kilopixel. Or exapixel.

No-one doesn't understand this?

GuB-42 10 hours ago

A pixel is neither a unit of length nor area, it is like a byte, a unit of information.

Sometimes, it is used as a length or area, omitting a conversion constant, but we do that all the time; the article gives mass vs force as an example.

Also worth mentioning that pixels are not always square. For example, the once popular 320x200 resolution has pixels taller than they are wide.

  • nayuki 3 hours ago

    It's not a unit of information. How many bytes does a 320×240 image take? You don't know until you specify the pixel bit depth in bpp (bits per pixel).

alzamixer an hour ago

Should be pixel as area and pixel-length as 1-dimensional unit.

So an image could be 1 megapixel, or 1000 times 1000 pixel-lengths.

rbanffy 5 hours ago

For those who programmed 8-bit computers or worked with analog video, a pixel is also a unit of time. An image is a long line with some interruptions.

fennecbutt 5 hours ago

Or perhaps it's multivariate and there's no point in trying to squish all the nuance into a single solid definition.

Aldipower 10 hours ago

A Pixel is a telephone.

anitil 12 hours ago

This is a fun post by Nayuki - I'd never given this much thought, but this takes the premise and runs with it

scotty79 3 hours ago

Pixel is just contextual. When you are talking about one dimensional things it's a unit of length. In all other cases it's a unit of area.

bitwize 5 hours ago

Pixels are not measurement units. They're samples of an image taken a certain distance apart. It's like eggs in a carton: it's perfectly legitimate to say that a carton is 6 eggs long and 3 eggs wide, and holds a total of 18 eggs, because eggs are counted, they're not a length measure except in the crudest sense.

forrestthewoods 11 hours ago

I’m surprised the author didn’t dig into the fact that not all pixels are square. Or that pixels are made of underlying RGB light emitters. And that those RGB emitters are often very non-square. And often not 1:1 RGBEmitter-to-Pixel (stupid pentile).

  • nayuki 3 hours ago

    Or the fact that a 1 megapixel camera (counting each color-filtered sensing element as a pixel) generates less information than a 1 megapixel monitor (counting each RGB triad as a pixel) can display.

  • petesergeant 10 hours ago

    > "Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte."

    or

    > "I have made this longer than usual because I have not had time to make it shorter."

jrvieira 7 hours ago

The author is very confused.

surfingdino 11 hours ago

A pixel is a sample or a collection of values of the Red, Green, and Blue components of light captured at a particular location in a typically rectangular area. Pixels have no physical dimensions. A camera sensor has no pixels, it has photosites (four colour sensitive elements per one rectangular area).

  • echoangle 9 hours ago

    And what’s the difference between a photosite and a pixel? Sounds like a difference made up to correct other people.

    • surfingdino 8 hours ago

      A photosite is a set of four photosensitive electronic sensors that register levels of RGB components of light https://www.cambridgeincolour.com/tutorials/camera-sensors.h... The camera sensor turns data captured by a single photosite into a single data structure (a pixel), a tuple of as many discrete values as there are components in a given colour space (three for RGB).

      • anfractuosity 5 hours ago

        I didn't think a single photosite was directly converted to a single pixel; there are quite a number of different demosaicing algorithms.

        Edit: Upon doing some more reading it sounds like a photosite or sensel, isn't a group of sensors, but a single sensor, which can pick up r,g,b,.. light - "each individual photosite, remember, records only one colour – red, green or blue" - https://www.canon-europe.com/pro/infobank/image-sensors-expl...

        I couldn't seem to find a particular name for the RGGB/.. pattern that a bayer filter consists of an array of.

      • echoangle 8 hours ago

        If you want to be pedantic, you shouldn’t say that the photosite has 4 sensors, depending on the color filter array you can have other numbers like 9 or 36, too.

        And the difference is pure pedantry, because each photosite corresponds to a pixel in the image (unless we’re talking about lens correction?). It’s like making up a new word for monitor pixels because those are little lights (for OLED) while the pixel is just a tuple of numbers. I don’t see why calling the sensor grid items „pixels“ is misunderstandable in any way.

        • surfingdino 7 hours ago

          You are right about the differences in the number of sensors, there may be more. I prefer to talk about photosites, because additional properties like photosite size or sensor photosite density help me make better decisions when I'm selecting cameras/sensors for a photo project. For example, a 24MP M43 sensor is not the same as a 24MP APS-C or FF sensor, even though the image files they produce have the same number of pixels. Similarly, a 36MP FF sensor is essentially the same as a 24MP APS-C sensor, it produces image files that contain more pixels from a wider field of view, but the resolution of the sensor stays the same, because both sensors have the same photosite density (if you pair the same lens with both sensors).

  • danwills 10 hours ago

    Is a pixel not a pixel when it's in a different color space? (HSV, XYZ etc?)

    • surfingdino 9 hours ago

      RGB is the most common colour space, but yes, other colour spaces are available.

dullcrisp 7 hours ago

Wait till they hear about fluid ounces.