Increase max image size (from 2048 x 2048 pixels)

Even if iNat removed the 2048x2048 limit (which I hope it does), there will necessarily be some limit on size and/or weight to keep storage and bandwidth manageable. This means that larger and/or heavier images will have to be processed by iNat in some way. For now, iNat relies on automatic resizing, which is probably the best option. There are potentially other means, such as increasing compression and/or cropping (and possibly chroma subsampling, if that isn't already done). Automatic cropping is quite risky, however: it would take a very smart deep-learning system to do the job properly. Furthermore, for JPEG images, too much compression generates annoying artifacts (an issue that can be partly alleviated by using JPEG XL, which can compress much more without generating JPEG artifacts, i.e. visually losslessly).

Consequently, whatever limit iNat eventually uses (the current one or a new one), the whole community would benefit if observers thought about uploading better photos in terms of content-to-weight ratio when possible, i.e. photos cropped enough (if needed) to eliminate non-informative background. To encourage observers in this way, it could be useful to send them a warning message when they attempt to upload an image that is too large or heavy, e.g. "Huge image that will be automatically resized, with loss of detail; please consider cropping it – see tutorial", and of course to provide some guidelines / best-practice rules about this in the iNat tutorial.
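As a rough illustration of the warning idea, here's a minimal sketch of a pre-upload check. The 2048 px limit is iNat's current one; the byte threshold and the function/message wording are my own assumptions for illustration, not anything iNat actually implements:

```python
# Hypothetical pre-upload check. MAX_DIMENSION is iNat's current limit;
# SUGGEST_CROP_BYTES is an assumed, illustrative threshold.
MAX_DIMENSION = 2048
SUGGEST_CROP_BYTES = 5 * 2**20  # 5 MB, purely illustrative

def upload_warning(width, height, size_bytes):
    """Return a warning string if the image would be auto-resized, else None."""
    if max(width, height) > MAX_DIMENSION:
        msg = ("Huge image: it exceeds {0}x{0} px and will be automatically "
               "resized, losing detail.".format(MAX_DIMENSION))
        if size_bytes > SUGGEST_CROP_BYTES:
            msg += " Please consider cropping out non-informative background."
        return msg
    return None
```

A full-resolution Z9 frame would trigger both parts of the message, while a typical already-downsized phone photo would pass silently.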

3 Likes

Yes, something like that… “enhance… enhance…”

I think it’s due in part to a lack of experience and also sub-par hardware. Lots of folks, especially students in Ecuador, have laughably bad cameras in their phones and will shoot photos of birds perched in a tree at a 5–10mm focal length from 25 meters at 5–10MP and expect someone to be able to ID them. I can barely differentiate Turdus sp. from these shots, let alone anything smaller than that.

From my own experience, tons of folks here use cheap Samsung A-series phones (or worse); the cameras even on the higher-end models are embarrassing, and that’s the best-case scenario. Many folks are also green to using mobile data, phone cameras, and iNat mobile — this is where folks will accidentally save their home location on Google Maps, publicly. Nothing surprises me anymore. I do understand how the contrast (no pun intended) can be very stark compared to the extreme levels of fidelity and contrast expected by western audiences; I think folks just don’t notice any issue and simply upload and move on.

I REALLY have to second the comment regarding data speeds. Again, on the continent, lots of folks are reliant on legitimately subpar data connections. I think my Claro data plan gets me about ±1 Mbps, but it’s usually half of that, and that’s one of the better carriers in S.A.

I’m sure plenty of folks in the southern US would echo this to an extent, although I know there are plenty of deer leases in South Texas with fibre-optic connections and wifi extending hundreds of yards into the fields.

That said, I think the photos included in my observations are horrible, but I would like to think that after about 15 years of experience I know my way around a camera; I just happen to have a prosumer rig that’s highly prone to noise, plus the limitations of a 300mm zoom on an APS-C sensor. The light here is almost always awful, and only so much can be done with a mid-level body when you have to crank the ISO to 1600+ just to get a shutter speed fast enough for a decent shot without a tripod.

4 Likes

I think more transparency about image policies would be helpful in general. I’d been using iNaturalist for a couple of years before I knew images were being automatically downsized, and probably should have / would have been thinking about non-iNaturalist archiving of cell phone images (my workflow for DSLR images is very different) from the start if I had known. I’m probably not the only one who was thinking, “It’s on iNaturalist, why would I keep copies of everything?”

6 Likes

Thank you!

[EDIT: Just to be clear after reading your post again which suggests some possible confusion – when I say “increasing the compression” this equates to reducing the jpg quality % figure, not increasing it. I am talking about increasing the level of compression applied / the compression ratio.]

I agree with the other commentators here – 99% is really overkill and makes no sense as a strategy in conjunction with downsizing the resolution. For high quality images I would usually use the 80 – 90% range. With the vast majority of images where there are say dozens of pixels per smallest element of meaningful detail, then even pixel peeping, 90% is practically transparent (e.g. good for wildlife shots with a very high level of detail), while at 80% you will see some changes to the pixels, but still with little impact to the appearance of the detail in question (e.g. good for larger subjects, or more average quality images).
The only exception would be when I’m really pushing the limits of pixel resolution, such as a heavily cropped very distant bird, or truly minute macro subject where every single pixel counts, and then I might go up to 95% (and since such images are very small to begin with, it is hardly worth worrying over a few kB difference in file size at that point).
This is why I said it ideally should be a sliding scale depending on the size (number of pixels) in question. Though I can’t see why you would ever want higher than 95% unless you are going through a large number of iterative steps of recompression at each stage.
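To make the sliding-scale idea concrete, here's a minimal sketch of quality as a function of pixel count. The specific megapixel breakpoints and quality values are my own illustrative assumptions based on the ranges discussed above, not a tested recommendation:

```python
def jpeg_quality_for(width, height):
    """Pick a JPEG quality % from a sliding scale based on pixel count:
    small, heavily cropped images keep more quality (every pixel counts);
    very large images can take more compression with little visible cost.
    Breakpoints are illustrative assumptions."""
    megapixels = width * height / 1e6
    if megapixels <= 1:
        return 95   # e.g. a heavily cropped distant bird or tiny macro subject
    if megapixels <= 8:
        return 90   # high-detail wildlife shots: practically transparent
    if megapixels <= 20:
        return 85
    return 80       # very large images: some pixel-level change, little impact
```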

Good to hear. I also did some webp comparisons earlier this year and found that it is a very complicated picture that is difficult to make a simple conclusion from since the results can vary significantly depending on the image in question (Squoosh is very useful here for side by side comparisons).
Roughly speaking I found webp tends to smooth things out more which can bring significant savings with certain kinds of noise or large regions of low detail (e.g. out of focus regions). But if you’re doing fussy pixel-peeping with very high detail subjects then it no longer seems to bring much advantage over the latest jpg encoders (and sometimes none at all). So I’m not sure if it is an ideal choice for high quality wildlife images, and jpeg-xl looks likely to be the better option in the long term, though obviously webp does have the advantage of being well supported on the web already.


P.S. I only just discovered today that you can quote people by selecting the text in question. This is nice functionality, but not at all intuitive being not at all like any other forum software I have ever used in the past. Maybe a little UI tip somewhere saying something like “select text to quote” would help?

1 Like

Thank you simben for making some great posts here which I generally agree with, particularly about the importance of file size / weight as the metric to target rather than resolution. Though I think even 5 MB is a little on the large side. I can’t think of any of my observations where there is any benefit to having more than 1 or 2 MB at most.
The only exception I can imagine is a very high-resolution shot of a whole mass of plant material where we are trying to make an ID based on, say, a tiny flower down in one corner of the image (this sort of situation isn’t so rare with uncropped images, but in these cases the current resolution-downsizing approach is far more destructive to the final image quality / file size ratio).

There are encoders in which you can choose to target a particular file size, so I assume you could easily enough set up one that limits the output to both a maximum JPEG quality % and a maximum file size. This would achieve the sliding scale I mentioned above in a simple manner: smaller images would be limited only by the maximum JPEG quality, whereas larger images would be increasingly constrained by the file size limit, with more compression applied to them (reducing the JPEG quality %).
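The dual-constraint approach can be sketched as a binary search over quality. This is a generic sketch, not any particular encoder's API: `encode(q)` stands in for a callable that returns the compressed size in bytes at quality `q` (e.g. a wrapper around a real JPEG encoder), and the default limits are illustrative assumptions:

```python
def constrained_encode(encode, max_quality=90, max_bytes=2 * 2**20,
                       min_quality=50):
    """Binary-search for the highest quality <= max_quality whose encoded
    size fits in max_bytes. Small images hit the quality cap; large images
    are pushed down the quality scale by the size cap. Returns the quality."""
    lo, hi = min_quality, max_quality
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if encode(mid) <= max_bytes:
            lo = mid          # fits: try a higher quality
        else:
            hi = mid - 1      # too big: apply more compression
    return lo
```

With a fake size model where bytes grow linearly with quality, a heavy image lands well below the cap while a light one simply gets the maximum quality.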

I also like your idea for better user education with tips to suggest cropping of images. I have tried to gently suggest this to some users with little success. But I think it is also a more conceptually and technically challenging task to ask users to edit their existing observations compared to cropping at the initial stage of upload (which I suppose many people already have some familiarity with in concept from messaging apps).

1 Like

I’m routinely dealing with plants where we might need to know, for instance: Are the hairs on the stem all eglandular and the same length, or a mixture of shorter glandular hairs and longer eglandular hairs? Or: Do the scales on the lower surface of the leaf have margins that are ciliate their entire lengths, or only in the basal half?

In that context, if we’re really lucky and the observer both has a good macro lens and knows to take a macro photo of those specific features, sure, we can get away with a smaller image. That’s rare, though. If none of your observations would benefit from higher resolution, you’re an exceptional observer.

1 Like

In case it wasn’t clear already – all of my posts in this thread have been intended to be very much in support of allowing higher resolution (taken here to mean “number of pixels”). The bit you have quoted is referring to file sizes (MB), and not resolution (MP). And keep in mind that neither of these parameters equate to resolving power which is what you require to distinguish the features you mention.
So I very much agree in turn with the point made in your post!

3 Likes

Currently I resize my camera photos to 75%, which for me is 3672 x 2754, because a typical camera picture is itself not perfect (the maximum photo size of a camera tends to exceed its real resolving limits), especially in suboptimal light, and nor is the hand totally still, so I find 75% is realistic without any real loss. Close-up features are done in Macro mode in the same way.

In addition, perhaps any iNaturalist resizing could be done in terms of total pixels rather than dimensions, since crops are not all the same shape.
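A total-pixel budget is easy to express: scale both sides by the square root of the budget-to-pixels ratio, so a square crop and a panorama get the same total pixel count. The function name and the budget (equal to the current 2048x2048 limit) are illustrative assumptions:

```python
import math

def fit_to_pixel_budget(width, height, max_pixels=2048 * 2048):
    """Scale dimensions so total pixels <= max_pixels, preserving aspect
    ratio. Unlike a per-side cap, this treats all crop shapes equally."""
    pixels = width * height
    if pixels <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / pixels)
    return max(1, round(width * scale)), max(1, round(height * scale))
```

For example, a 3:2 crop and a wide panorama would both come out at roughly 4.2 megapixels, instead of the panorama being penalized for its long side.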

However, very importantly, JPEG XL really does beat the other formats hands down in my view, and it doesn’t create a blocky effect when zoomed in. This means conscientious nature photographers, as they become aware of it, will want to store their photos in JXL format, and it would be helpful to be able to upload in that format, whatever iNaturalist does with it once uploaded…

I shoot with a Z9 that outputs images at a massive 8256 x 5504, but when the item of interest is small I try to crop somewhat tightly :slight_smile: I’ve attached some observations below where I think the high resolution output was actually useful (photos of something very small, or something very far away)

https://www.inaturalist.org/observations/324588633

https://www.inaturalist.org/observations/324921093

https://www.inaturalist.org/observations/324911822

https://www.inaturalist.org/observations/313033692

https://www.inaturalist.org/observations/307733647

I think generally someone who has a camera with that kind of specs would know to crop photos to show the organism in question?

1 Like

Using JPEG XL would be great, but unfortunately Google stubbornly refuses to support it in Chrome. This is probably because it competes with their own WebP format which they’ve been aggressively pushing (with limited success) since 2010. JPEG XL is much nicer than WebP (both for compression and image quality), and JPEG XL has much more support from the tech industry at large, but since Google has a 72% monopoly on the browser market, they effectively get to choose which file formats are viable so of course they are going to choose the ones they control. :disappointed_face:

2 Likes

I use a Z8 (same sensor as Z9) and heavily crop if needed. It allows me to get decent sized images of tiny things like springtails on mushrooms that I’m not focusing on.

4 Likes

Doesn’t the storage size of an image increase exponentially as you increase the number of pixels?

Not exponentially — roughly, file size grows linearly with the number of pixels, which in turn goes as the square of the linear dimensions: multiply or divide both sides by 2 and you increase/decrease the pixel count, and with it (approximately) the information and file size, by 4. The precise figure is going to vary with the content of the photo and the compression used.
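As a quick back-of-the-envelope check, here's the linear-in-pixels estimate in code. The numbers are illustrative, not measurements — real compressed sizes also depend heavily on the image content:

```python
# File size scales roughly with pixel count, not with side length:
# doubling both sides gives 4x the pixels and ~4x the bytes.
def estimate_bytes(base_bytes, base_w, base_h, new_w, new_h):
    """Linear estimate of compressed size after resizing, all else equal."""
    return round(base_bytes * (new_w * new_h) / (base_w * base_h))

# An (assumed) 2 MB image at 2048x2048, doubled to 4096x4096,
# would be expected to come out around 4x as large.
assert estimate_bytes(2_000_000, 2048, 2048, 4096, 4096) == 8_000_000
```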

1 Like

Google seems to be revising their decision and may bring JPEG XL to chrome, hopefully next year: https://www.phoronix.com/news/JPEG-XL-Possible-Chrome-Back

1 Like