Bug 983050 (Open) - Opened 10 years ago - Updated 2 years ago

Do not store image source in memory if memory is constrained

Categories

(Core :: Graphics: ImageLib, defect, P3)

All
Gonk (Firefox OS)
defect

People

(Reporter: kanru, Unassigned)

References

(Depends on 2 open bugs)

Details

(Keywords: perf, Whiteboard: [c=memory p= s= u=tarako] [MemShrink:P2][demo])

+++ This bug was initially created as a clone of Bug #945161 +++

With decode-on-draw we still cache the image source data in memory for future decoding. This can use a lot of memory if the image source is large.

Instead, store the image source in a file cache, or download it again only when needed.
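Here is a minimal, self-contained sketch of the shape this could take. It is illustrative only, not ImageLib code: SourceStore, SpillToDisk, and the cache-file path are made-up names, and a real implementation would hook SpillToDisk up to the memory-pressure observer.

#include <cstddef>
#include <cstdio>
#include <optional>
#include <string>
#include <vector>

// Holds an image's compressed source data either on the heap or spilled to a cache file.
class SourceStore {
public:
  explicit SourceStore(std::string aCacheFilePath)
    : mCacheFilePath(std::move(aCacheFilePath)) {}

  void SetData(std::vector<unsigned char> aData) { mData = std::move(aData); }

  // Memory-pressure hook: write the source to the file cache and drop the heap copy.
  bool SpillToDisk() {
    if (mData.empty()) {
      return true;
    }
    std::FILE* file = std::fopen(mCacheFilePath.c_str(), "wb");
    if (!file) {
      return false;
    }
    std::size_t written = std::fwrite(mData.data(), 1, mData.size(), file);
    std::fclose(file);
    if (written != mData.size()) {
      return false;
    }
    mSpilledSize = mData.size();
    mData.clear();
    mData.shrink_to_fit();
    return true;
  }

  // Decode path: return the source data, re-reading the cache file if it was spilled.
  std::optional<std::vector<unsigned char>> GetData() {
    if (!mData.empty()) {
      return mData;
    }
    if (mSpilledSize == 0) {
      return std::nullopt;  // never had any data
    }
    std::vector<unsigned char> buffer(mSpilledSize);
    std::FILE* file = std::fopen(mCacheFilePath.c_str(), "rb");
    if (!file) {
      return std::nullopt;
    }
    std::size_t read = std::fread(buffer.data(), 1, buffer.size(), file);
    std::fclose(file);
    if (read != buffer.size()) {
      return std::nullopt;
    }
    return buffer;
  }

private:
  std::string mCacheFilePath;
  std::vector<unsigned char> mData;   // empty once spilled
  std::size_t mSpilledSize = 0;       // size of the spilled file, 0 if never spilled
};

The download-on-demand alternative would look the same from the decoder's point of view: GetData() would issue a network fetch instead of a file read.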
Depends on: 983051
Doesn't the OOM handler throw this cache out? If so, why eagerly avoid caching?
Depends on: 983056
(In reply to Andreas Gal :gal from comment #1)
> Doesn't the OOM handler throw this cache out? If so, why eagerly avoid
> caching?

No, we only discard the decoded image, not the source data. Once the source data has been downloaded it stays in memory until the RasterImage is deleted. Try http://www.boston.com/bigpicture on the phone and you will see what I mean.
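For a rough sense of scale (the numbers below are illustrative assumptions, not measurements of that page): a Big Picture entry has on the order of 30 large photos, and at roughly 300 KB of JPEG source each that is

  30 photos * ~300 KB ≈ 9 MB

of compressed source data held for as long as the page is alive, even after every decoded frame has been discarded. On a low-memory Gonk device like tarako that alone is a significant chunk of the content process's budget.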
Depends on: 984759
No longer depends on: 984759
Could we back the memory with the image file?
(In reply to Thinker Li [:sinker] from comment #3)
> Could we back the memory with the image file?

Seth said that we could see if we are able to pin a necko cache entry for a period of time, perhaps bound to the docShell's lifetime.
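To make the lifetime idea concrete, here is a rough sketch. The types are hypothetical: CacheEntryHandle, ScopedCachePin, and DocShellLike are not real necko or docshell interfaces, just stand-ins for the ownership pattern.

#include <memory>
#include <string>
#include <vector>

// Stand-in for a cache entry that must not be evicted while pinned.
struct CacheEntryHandle {
  std::string key;
  void Pin()   { /* a real backend would mark the entry non-evictable here */ }
  void Unpin() { /* a real backend would allow eviction again here */ }
};

// RAII holder: the pin lives exactly as long as its owner does.
class ScopedCachePin {
public:
  explicit ScopedCachePin(std::shared_ptr<CacheEntryHandle> aEntry)
    : mEntry(std::move(aEntry)) {
    if (mEntry) mEntry->Pin();
  }
  ~ScopedCachePin() {
    if (mEntry) mEntry->Unpin();
  }
  ScopedCachePin(const ScopedCachePin&) = delete;
  ScopedCachePin& operator=(const ScopedCachePin&) = delete;

private:
  std::shared_ptr<CacheEntryHandle> mEntry;
};

// A docShell-like owner keeps one pin per image source it is displaying; when the
// owner goes away, the destructors run and the cache may evict the entries again.
struct DocShellLike {
  std::vector<std::unique_ptr<ScopedCachePin>> mPinnedImageSources;
};

The point of the RAII holder is that the pin's duration is tied to its owner, so whatever we pick as the owner (the docShell or something else) automatically bounds how long the cache entry stays resident.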
Depends on: ddd
(In reply to Kan-Ru Chen [:kanru] (UTC+8) from comment #4)
> Seth said that we could see if we are able to pin a necko cache entry for a
> period of time, perhaps bound to the docShell's lifetime.

What we should be doing is ensuring that we don't have a copy of the image's source data in memory *both* in the HTTP cache and in the ImageLib cache. It makes sense, for this reason, to pin the HTTP cache entry and read it back out of the HTTP cache if needed.

We almost certainly do not want to have *zero* copies of the image's source data in memory, though. That makes downscale-during-decode impossible and also means that we can never discard the image. Since the decoded form of images is *huge* compared to the source data, that is a net loss in virtually every scenario.
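To make that ratio concrete (illustrative numbers, not measurements): a 1200x800 photo decoded at 32 bits per pixel takes

  1200 * 800 * 4 bytes ≈ 3.8 MB

while its JPEG source is typically a few hundred KB, roughly an order of magnitude smaller. Keeping that small source copy in memory is what lets us throw the multi-MB decoded surface away and re-decode it later; with no copy of the source data we could never re-decode, so we could never discard the decoded image at all.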
Moving to P3 because there has been no activity for at least 1 year.
See https://github.com/mozilla/bug-handling/blob/master/policy/triage-bugzilla.md#how-do-you-triage for more information
Priority: P2 → P3
Severity: normal → S3