On Fri, 2009-09-18 at 11:19 +0100, Michael Drake wrote:
In article <1253233759.5804.394.camel@duiker>,
John-Mark Bell <jmb(a)netsurf-browser.org> wrote:
> What I've currently got is probably a bit too dependent on PNG, but it's
> showing promise. The in-memory format of image data is as follows:
> + Interlaced images have more scanlines than necessary. The extra
> scanlines are treated as above.
Do you mean while an interlaced PNG is partially decoded? Are those extra
scanlines discarded when they become unnecessary?
No. Adam7 interlacing builds up a full image from 7 sub-images. The
current way that interlaced PNGs are handled is to store each of the
sub-images, then populate the full scanlines from them.
> This allows easy redraw of bitmap sections -- only decompress
> relevant scanlines.
Just to be clear, you mean only decompress the relevant scanlines from the
intermediate LZF compressed state (as opposed to original source data)
when they're needed for redraw?
Yes. In the case of PNGs, the source image data is not conducive to
on-the-fly processing of parts of the image. Additionally, performing
that processing is significantly more complicated than manipulating LZF
compressed scanline data (and hence would have dire redraw performance).
> On the downside, processing and rendering are slower -- I've not measured
> how much slower yet, especially as there's much scope for improvement:
> + Deinterlace images before storing them -- avoids the need to
> consider interlacing during redraw.
> + Make all scanlines RGB(A) before compressing them -- avoids the
> need to consider the possible scanline types during redraw.
> (Completely dropping the alpha channel for opaque images provides
> a free 25% reduction in space requirements.)
> + Compress chunks of 8 scanlines, say, to amortise the cost of
> (de)compression, particularly during redraw.
I guess doing things in chunks of scanlines will be needed for formats
like JPEG anyway.
I don't see any reason why JPEG would need chunks of scanlines when PNGs
don't.
> Some questions:
> 1) Is it worth pursuing this further?
Yes, the memory savings are certainly worthwhile, but I think we need to
know if the speed penalty is going to be significant.
Quite, but right now, there's no way of measuring that sensibly.
> 2) What kind of API do people want? Right now, when wanting to redraw,
> the client passes a row callback which is called for every scanline
> in the specified clipping region. The bitmap data fed to the
> callback is uncompressed 32bpp RGBA. The callback is responsible for
> actually outputting the scanline (including any alpha blending,
> colour-space conversion, and scaling).
So the front end would make the calls to plot each scanline? So e.g. for
the RISC OS front end, we'd call Tinct once per image scanline instead of
once per image? Or would the front end build the scanlines into a single
32bpp bitmap and plot that in one?
What the front end does is up to it -- I don't care :) I'm not sure
that calling Tinct for each scanline is at all worthwhile. For starters,
it prevents it from performing error diffusion when downsampling. You'd also
have to wrap the scanline up in a Sprite, which seems silly.
I guess having 8 scanline chunks would make this more efficient too.
I wasn't proposing presenting the client with chunks of scanlines.
Maybe making the core do the image scaling is something we could do
too. In the future we may need to plot rotated bitmaps as well.
Is that sensible? I'm sure platform imaging toolkits will be far more
efficient at scaling.
> Given that changing the way in which bitmaps work impacts many parts of
> the core and frontends, I'd not be intending to even start a transition
> until we've sorted out content caching.
Yeah. I also think that for bitmaps we should consider deferring the
decoding of images (from original source data) to when they're going to be
displayed. At the moment we decode big images that may be at the bottom
of a page that the user never scrolls down to see.
Right now, image data is processed as it's received off the network.
There's currently no way for the image content handlers to know whether
their content is going to be used immediately or not.