Increase max image size (from 2048 x 2048 pixels)

I do crop, but I still want enough quality to allow zooming in for more detail, instead of it looking pixelated :sob:
Adding the original wide view as a second picture gives more info.

4 Likes

We took a look into this recently.

Using different sizes/quality for licensed or non-licensed images was deemed too complicated and would create problems if, say, someone changed their default photo license from a Creative Commons license to all rights reserved.

Currently we use 99% quality for image compression, and I can’t tell the difference between “original” size large images I’ve posted to iNat and 100% quality 2048px jpeg exports from my photo software. If we used 100% quality for our image compression with the current method (not JPEG XL), it would increase file size by over 28% with little to no bump in image quality. Increasing image dimensions to 3072px on the long side doubles the file size, using our current method.
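For anyone who wants to poke at these numbers themselves, here’s a minimal sketch using Pillow as a stand-in (whatever iNat actually uses server-side isn’t stated here, so the exact sizes will differ):

```python
# Rough way to measure the size cost of quality vs. dimensions.
# Assumption: Pillow's libjpeg-based encoder stands in for iNat's pipeline.
from io import BytesIO
from PIL import Image

def jpeg_size(img, quality):
    """Return the encoded JPEG size in bytes at a given quality."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def resize_long_side(img, long_side):
    """Scale an image so its longer edge equals long_side."""
    scale = long_side / max(img.size)
    return img.resize((round(img.width * scale), round(img.height * scale)))

img = Image.open("original.jpg")  # hypothetical input file
for long_side in (2048, 3072):
    small = resize_long_side(img, long_side)
    for q in (90, 95, 99, 100):
        print(long_side, q, jpeg_size(small, q))
```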

We didn’t test that, but we did test out webp compression. An image with 3072px on the long side would be 9% smaller than the 2048px images we get with our current compression, but the trade-offs are that compression takes 62% longer and webp may not be broadly compatible yet. We’ll keep looking into it.
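The webp trade-off is also easy to try at home, again with Pillow as a stand-in; its `method` parameter trades encode time for file size, the same slower-but-smaller trade-off described above:

```python
# Sketch: encode the same image as WebP and compare bytes.
from io import BytesIO
from PIL import Image

img = Image.open("original.jpg")  # hypothetical input file
buf = BytesIO()
# method=6 is the slowest/smallest setting; 0 is the fastest/largest.
img.save(buf, format="WEBP", quality=90, method=6)
print("webp bytes:", buf.tell())
```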

Note that if we do change our image compression and resizing specs, we wouldn’t run existing images through that - just newly uploaded ones.

10 Likes

On the second issue here, my suggestion would be: Don’t allow users to move images from CC to all rights reserved.

2 Likes

FWIW there’s a jpg encoder that is comparable to webp in quality/size according to comparison tests from a few people, and it works with existing jpg decoders: https://github.com/mozilla/mozjpeg
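An untested sketch of what re-encoding through mozjpeg’s command-line tools might look like (`cjpeg` only accepts uncompressed input such as PPM, so `djpeg` decodes first):

```python
# Sketch: re-encode an existing JPEG with mozjpeg's cjpeg/djpeg.
import subprocess

def mozjpeg_reencode(src, dst, quality=90):
    # djpeg writes PPM to stdout; cjpeg reads it from stdin.
    decoded = subprocess.run(["djpeg", src], capture_output=True, check=True).stdout
    subprocess.run(
        ["cjpeg", "-quality", str(quality), "-optimize", "-outfile", dst],
        input=decoded, check=True,
    )

mozjpeg_reencode("in.jpg", "out.jpg")  # hypothetical file names
```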

2 Likes

Hello,
I would like to advocate for removing the 2048x2048 limit, or possibly replacing it with the UHD limit (i.e. 3840x2160 pixels), along with a file-size limit of something like 5 MB. Indeed, in my view, to limit storage and bandwidth while preserving a maximum of detail, it is more important to limit the file size of images than their dimensions in pixels. With a simple rule, very high-fidelity images at relatively large dimensions (up to UHD, which will probably be the standard screen resolution in the near future) can stay under 5 MB.

Currently this simple rule is to use jpeg compression with quality in the 90-95% range. It would be pure nonsense to use 100% quality, and even 99% does not really make sense. Images are visually lossless at 95%, or even at 90% (depending on whether chroma has been subsampled or not). This results in relatively light files. For example, of the thousands of UHD photos I have saved as jpeg at 95% without chroma subsampling (for very, very high fidelity), only a tiny fraction are over 5 MB (none is over 8 MB), and those could easily be brought under 5 MB with a bit of cropping. In many cases, cropping is indeed a very simple and efficient way to reduce file size. However, be aware that, given the 8x8 block structure of jpeg files, cropping can be quite destructive and generate visual artifacts. Fortunately, a number of free viewers (in particular XnView classic, XnViewMP and IrfanView) make it possible to crop losslessly (i.e. to discard the 8x8 blocks outside the cropping area while keeping the blocks inside unchanged).
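For scripted workflows, `jpegtran` (shipped with libjpeg/libjpeg-turbo/mozjpeg) does the same lossless crop from the command line; the crop offsets get snapped to the JPEG block grid, so the kept blocks are never re-encoded. A minimal sketch:

```python
# Sketch: lossless JPEG crop via jpegtran.
import subprocess

def lossless_crop(src, dst, w, h, x, y):
    subprocess.run(
        ["jpegtran", "-crop", f"{w}x{h}+{x}+{y}", "-copy", "all",
         "-outfile", dst, src],
        check=True,  # -copy all preserves EXIF and other metadata
    )

lossless_crop("in.jpg", "cropped.jpg", 1600, 1200, 320, 240)  # hypothetical values
```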

In the near future, I think the jpeg-xl format will become the obvious choice. Existing jpeg files can be converted to jpeg-xl reversibly (so it is always possible to get back the exact jpeg file if needed). I recently converted all my old jpeg files this way, and the size saving is about 20%. For new photos (I shoot in raw), I now convert directly to jpeg-xl (at quality 90%, which is enough for a visually lossless image, according to my own tests and other people’s), and the file is then >50% smaller than the same file saved as jpeg. Older jpeg files can of course also be re-saved (rather than losslessly converted) as jpeg-xl with a >50% size saving, but in that case it is not possible to get back the exact original jpeg if needed. The current issue with jpeg-xl is that only a few browsers (e.g. Chrome) can display it, and only a few viewers (e.g. XnViewMP) can read and write it. Camera makers also do not yet offer saving photos directly as jpeg-xl, but saving them as very-high-quality jpeg and re-saving them afterwards as 90% jpeg-xl results in high-fidelity, light files. I hope iNat will quickly allow users to upload images in this format (currently I have to convert my jpeg-xl files back to jpeg to upload them ;-) ). Conversion tools and information can be found here: http://jpegxl.info/
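For reference, the reversible round trip described above looks something like this with the cjxl/djxl tools from the libjxl reference implementation (see jpegxl.info):

```python
# Sketch: reversible JPEG -> JPEG XL -> JPEG round trip.
import subprocess

# Lossless recompression: when the input is a JPEG, cjxl defaults to
# transcoding it so the original bitstream stays recoverable.
subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

# Reconstruct the byte-exact original JPEG if ever needed.
subprocess.run(["djxl", "photo.jxl", "photo_restored.jpg"], check=True)
```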

By the way, I do not understand why iNat currently “resaves” uploaded jpeg photos, even those smaller than 2048x2048. I ran some tests and was surprised to see that the so-called “original” image is not exactly the same as the uploaded image (although visually indistinguishable to the human eye), even when the upload was smaller than 2048x2048. This seems to apply to both “standard” (baseline) and “progressive” jpeg images; the latter are converted to baseline jpeg. This is a real pity, because not only are progressive jpeg files a little lighter than their baseline counterparts, but, more importantly, a progressive jpeg can be displayed (at lower resolution) before it has fully downloaded, so an identifier with a poor internet connection could recognise a species without waiting for the full download when high resolution (to see tiny details) is not required. Consequently, in my view it would make more sense to convert baseline jpegs to progressive ones (losslessly, i.e. without decompression/recompression) rather than the reverse.
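The lossless baseline-to-progressive conversion suggested here can also be done with `jpegtran`; a minimal sketch:

```python
# Sketch: convert a baseline JPEG to progressive without re-encoding pixels.
import subprocess

subprocess.run(
    ["jpegtran", "-progressive", "-copy", "all",
     "-outfile", "progressive.jpg", "baseline.jpg"],  # hypothetical file names
    check=True,
)
```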

thanks for your attention
simben

6 Likes

For what it’s worth, my impression is that something around 3600 × 2400 is the maximum meaningful resolution for most cases until you’re getting into relatively expensive lenses for interchangeable-lens cameras (DSLR / mirrorless)… and unless you’re getting into medium format or lenses that cost a few thousand dollars, I doubt resolution over 4500 × 3000 is going to be meaningful except in a few anomalous “best case” scenarios.

Agreed. Compared to the current 2048 × 2048 & 99% quality jpg compression, I would definitely opt for more pixels and more compression. Although I’m not sure how the scale used in the Photoshop “save as” dialogue compares with values in % (the Photoshop scale is 1 to 12; I guess they watched This Is Spinal Tap), I’m finding that 3000 × 2000 images at jpg quality “9” are about the same size as 2048 × 1365 images at “11”. I can tell the difference between 3000 × 2000 and 2048 × 1365, but I don’t think I can tell the difference between 9 and 11. (Maybe the difference between 9 and 11 would be noticeable in low-noise images with a lot of smooth gradients, but this is unlikely to affect identifiability.)

1 Like

@tiwane: I agree with both @simben and @aspidoscelis. Increasing the resolution limit to something like 3600x2400 while decreasing the jpg quality from 99% to something in the range of 90-95% would significantly improve image quality without increasing file size or adding noticeable compression artifacts. As a test, I saved several of my wildlife photos at different sizes/qualities and averaged the results:
3600x2400 90%: 1.3 MB
3600x2400 95%: 2.0 MB
2048x1365 99% (current settings): 1.7 MB
As you can see, the file sizes are comparable (or even an improvement), and I preferred the quality of both 3600x2400 versions over that of the 2048x1365 99% images.
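In case anyone wants to repeat this test on their own photos, here’s a rough Pillow version (a different encoder than whatever I used, so expect somewhat different absolute numbers; the `wildlife` folder name is just an example):

```python
# Sketch: average encoded size over a folder of photos at each setting.
from io import BytesIO
from pathlib import Path
from statistics import mean
from PIL import Image

def encoded_size(path, long_side, quality):
    img = Image.open(path)
    scale = long_side / max(img.size)
    img = img.resize((round(img.width * scale), round(img.height * scale)))
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

photos = list(Path("wildlife").glob("*.jpg"))
for long_side, q in ((3600, 90), (3600, 95), (2048, 99)):
    avg = mean(encoded_size(p, long_side, q) for p in photos)
    print(f"{long_side}px @ {q}%: {avg / 1e6:.1f} MB")
```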

4 Likes

What library or utility are you encoding the JPEG with? Wouldn’t this be a factor in what percentage quality is good enough?

2 Likes

That’s probably true. I’m using Affinity Photo, not sure what the underlying library is. Does anyone know what iNaturalist uses? ImageMagick?
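If it were ImageMagick (pure guess, for illustration only), the resize-and-recompress step might look like this; the `2048x2048>` geometry only shrinks images that are larger:

```python
# Sketch: ImageMagick resize + quality step, assuming the IM7 "magick" CLI.
import subprocess

subprocess.run(
    ["magick", "in.jpg", "-resize", "2048x2048>", "-quality", "99", "out.jpg"],
    check=True,
)
```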

2 Likes

Yeah. Ideally, someone who’s spent way too much time with photo editing would fine-tune the value with whatever method goes into use. I’ve got the “way too much time with photo editing” part, but I have no experience with the automated photo-processing part.

2048 × 2048 & 99% quality compression seems like it is very unlikely to be the optimum, though.

The official library is called libjpeg, and it is used by most (if not all) free viewers (possibly in a “boosted” form called libjpeg-turbo). Some commercial viewers/photo editors also use it, while others use their own system. In the latter case, when the scale is 0-100%, I guess the values are more or less equivalent to libjpeg’s. Some others, like Photoshop, have their own system and scale, and the equivalence is not so obvious. In addition, the compression level and optional chroma subsampling are set by distinct parameters in libjpeg, whereas in other systems they are linked (a high compression level necessarily involving chroma subsampling). A useful tool is XnViewMP, a free viewer available for various platforms (Windows, Linux, Mac) that can show (in the file property window) interesting information about jpeg images, such as whether they are progressive, the chroma subsampling, and a quality estimate (on the libjpeg scale).
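To illustrate that separation: Pillow’s libjpeg-based encoder exposes quality and chroma subsampling as independent parameters, e.g.:

```python
# Sketch: same quality, different chroma subsampling.
from PIL import Image

img = Image.open("in.jpg")  # hypothetical input file
# subsampling=0 keeps full chroma (4:4:4); 2 is the common 4:2:0.
img.save("out_444.jpg", quality=90, subsampling=0)
img.save("out_420.jpg", quality=90, subsampling=2)
```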

2 Likes

If people cropped the right way, this would hardly be an issue. You can make 800x800 pics with all the identifying features visible. On the other hand, you’d have tons of pics from overenthusiastic users in original size … blurry, out of focus, taken from too far away, identifying features not depicted, too bright/dark, false colors … but 6000x4000 … do we really need that?

Exactly. And I am afraid that the data quality for many pics wouldn’t improve much by a higher resolution.

I think this is a good idea. On the other hand, there are so many freeware tools out there that’d do the job … if people don’t use those tools, I guess they wouldn’t use an iNat cropper either.

1 Like

Right. An additional issue is that cropping a jpeg image can be destructive. As said in a previous post, fortunately there are some free tools, such as XnViewMP (available on various platforms), able to crop non-destructively (i.e. losslessly). People who take photos with their smartphone and do not want to go through a computer to upload them to iNat can use the “JPEG Cropper -lossless retouch” app. It is very easy to use but (as far as I know) only available for Android phones (I guess an equivalent app exists for iPhones). However, given the huge resolution of modern sensors, the dimensions and file size of images can remain quite large even after cropping. This is why it is important to follow consistent rules. iNat could propose some guidelines in the help system to encourage people to upload their images with the best quality / size / file-size trade-off …

Only if you’re taking photographs of organisms that don’t have that much morphology to look at. :-) With plants, you’ve got a lot of parts and all of those parts might have hairs and other small details it would be useful to see.

Also, most people new to plants don’t know what parts might be important. You get a lot of photographs like this (pretty, maybe, but not very informative):

[photo]

And not many like this (much more informative!):

[photo]

The parts of the plant the observer accidentally included in the picture are often the parts of the plant I’m paying attention to when IDing. If we got observers to all crop their photos more, I think it’s more likely that the rate of observations not identifiable to species would go up than down.

All that aside, we have to work with the photographs observers upload, not the photographs we wish observers had uploaded. iNaturalist has control over how much of the information in the uploaded images is kept vs. discarded.

4 Likes

Only if you photograph something extremely small, like collembola nymphs or others of that size; you can’t crop that hard on anything bigger than 4 mm. When I ID insects I would prefer having those 6000px blurry pics; that way there’s a chance of seeing something pictured there. If they were blurry and 800px, it would be a lost cause.

Sure (I am a botanist). But it is of little help to depict the whole plant at 6000x4000. Better to add several close-up photos of the stipules, hairs, leaves …

2 Likes

Usually if I get close enough to fill the whole pic with a tiny animal (for me, underwater, things like shrimps etc.), then the ID traits usually remain visible even when downsized. Of course, downsized AND blurry is too much (or rather, too little). But many uncropped pics just give you 99% irrelevant background. So cropping them instead of downsizing would be the sensible option, imo.

2 Likes

If someone uploads a photo of the whole plant at 6000 × 4000, we might well be able to see all those parts well enough at the original resolution, but not at 2048 × 1365. An argument in favor of the observer uploading different materials is irrelevant when we’re deciding whether or not to keep the information in the materials the observer did upload.

2 Likes

(Of course, if we wanted to be very literal, we could meet your 800 × 800 criterion by just cutting a 6000 × 4000 image into a bunch of little squares. :-) I’m not sure why we’d want to, but we could.)

If the observer gives their 2 cents and crops to show the relevant details, we can also avoid the “What are we looking at here … beetle, flower, whatever?” question.

1 Like