Discussion:
[cairo] [Pixman] Better quality downsampling in cairo/pixman
Jonathan Morton
2010-06-29 07:15:24 UTC
Permalink
2) How come Gimp's linear looks much like cubic? Does it seem to do
something more than just bilinear interpolation?
Nearest-neighbour filtering samples only one source pixel per
destination pixel. Bilinear filtering samples the four adjacent source
pixels per destination pixel, and bicubic a fixed 4x4 neighbourhood. If
the scale factor is 1:3 or beyond, source pixels will therefore be
missed.

For high-quality scaling at extreme scale factors (as for thumbnails),
the filtering needs to take a number of samples which depends on the
scale factor.
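As a rough illustration, here is a minimal 1-D sketch (not pixman
code; the names are made up) of a box filter whose support grows with
the scale factor:

    #include <math.h>
    #include <stdint.h>

    /* Average the ceil(1/scale) source pixels covered by one
     * destination pixel; grayscale and edge-clamped for simplicity. */
    static uint8_t
    box_sample (const uint8_t *src, int src_len, int dst_x, double scale)
    {
        double center = (dst_x + 0.5) / scale;   /* into source space */
        int support = (int) ceil (1.0 / scale);  /* samples per pixel */
        int first = (int) floor (center - support / 2.0);
        unsigned sum = 0;

        for (int i = first; i < first + support; i++)
        {
            int k = i < 0 ? 0 : (i >= src_len ? src_len - 1 : i);
            sum += src[k];
        }
        return (uint8_t) (sum / support);
    }

At 1:3 this averages three source pixels per destination pixel, so no
source pixel is skipped.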
3) I know from past experience that even bicubic can be faulty when it
comes to downsampling by a factor beyond 1/4. I have solved this
issue by using a Lanczos (sinc) filter once. Should we consider using
this filter too?
Sinc filtering is indeed a high-quality technique. It can also be rather slow.

Another technique which fulfils the requirements is plain old box
filtering. This might be quicker for animation. When the scale
factor is near 1:1 this needs to switch gracefully to bilinear, or
else it will look like nearest-neighbour.

- Jonathan Morton
Bill Spitzak
2010-06-29 19:21:13 UTC
Permalink
Hi all,
I don't really like to beat the dead horse, but here it goes... :)
(Apparently my first try didn't pass moderation, so I'm retrying now
with a more compact attachment.)
The other day I was using Alt+TAB to switch between windows in my
brand new Ubuntu desktop and was unpleasantly surprised that the window
thumbnails' quality stood in striking contrast to the overall eye-candy
of the desktop (other effects seem to use OpenGL).
Not sure if cairo is to blame here, but the simple test I made shows
that the best filter it has in its toolbox is bilinear
(cairo-best.png). Gimp beats cairo with its left hand when it comes
to downsampling (see gimp-cubic.png). And even gimp-linear.png seems
to be much better than cairo's best attempt!
I'm willing to spend my time improving cairo's downsampling
capabilities, if experienced people out there are willing to help me
by answering my naive questions and pointing me in the right direction.
:)
1) How hard would it be to add a new cairo (or does it belong in
pixman?) filter, say bicubic interpolation?
The problem is that the current filtering is incorrect for any scale
less than 1/2. It is not any kind of filtering at all; instead it
interpolates the two pixels nearest the sample point. What interpolation
function it uses is pretty much irrelevant (though it has to be linear
for it to make any sense at all; in that case it is equivalent to a box
filter for scales from 1/2 to infinity).

There was a flurry of work on this at one time but I can't remember
the details. The general impression is that for maximum efficiency
Cairo should perhaps do part of the step: it could box-filter the image
down to an intermediate image so that the scaling is in the range
1/2-infinity, then let an unchanged pixman do the rest. Cairo would
then preserve this scaled image until a different one was requested or
the source surface was changed, because it is likely a similar scale
would be needed again. There was lots of talk about mipmaps, but that
only saves time if the image will be drawn many times at many different
scales, which seems not to be Cairo's main use.
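A sketch of the factor selection (a hypothetical helper, not cairo
internals):

    #include <math.h>

    /* Pick the integer reduction n such that 1/n is the next larger
     * 1/integer over the transform's scale.  The residual scale left
     * for the existing bilinear filtering is scale * n, which always
     * lands in (1/2, 1], where bilinear behaves correctly. */
    static int
    prescale_factor (double scale)   /* 0 < scale <= 1 */
    {
        int n = (int) floor (1.0 / scale);
        return n < 1 ? 1 : n;
    }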

There is also some interest in new filtering for zooming in; many users
prefer to see the pixels drawn as antialiased squares (like OSX does)
rather than smooth gradients (Linux & Windows). I think that would have
to be addressed in pixman, and means the interpolation function is a Z
shape: 0 and 1 at the ends with a linear slope in the middle, where the
steepness depends on the scale.
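One way to read that as code (a sketch, not an existing pixman
filter): for a fractional position t in [0,1] between two source
pixels and an upscale factor m >= 1, blend only within a band of width
1/m around the pixel boundary:

    /* Z-shaped interpolation weight: 0, then a linear ramp of width
     * 1/m centred on the pixel boundary, then 1. */
    static double
    z_weight (double t, double m)
    {
        double half = 0.5 / m;            /* half-width of the ramp */
        if (t < 0.5 - half) return 0.0;   /* entirely the left pixel */
        if (t > 0.5 + half) return 1.0;   /* entirely the right pixel */
        return (t - (0.5 - half)) / (2.0 * half);
    }

At m = 1 this reduces to ordinary linear interpolation; as m grows the
ramp steepens, giving antialiased squares.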
2) How come Gimp's linear looks much like cubic? Does it seem to do
something more than just bilinear interpolation?
"Linear" probably means a box filter, it is poorly named. If you drew
the filter it would be a rectangle 1 output pixel wide and 1 tall. Slide
this across the pixels and (as long as the input pixels are 1/2 or more
wide) it will intersect exactly t of one pixel and 1-t of the next, with
t varying between 0 and 1. This leads to linear interpolation.

"Cubic" may mean a cubic function for interpolation, but this is
equivalent to a triangle-shaped filter, 2 pixels wide at the base. Or it
can mean a cubic-shaped filter. So this term is misleading.
3) I know from past experience that even bicubic can be faulty when it
comes to downsampling by a factor beyond 1/4. I have solved this
issue by using a Lanczos (sinc) filter once. Should we consider using
this filter too?
I suspect there was some problem other than the filter selection in
this case. More likely the entire algorithm was altered to one that
applies the filter correctly.
Soeren Sandmann
2010-06-30 15:57:32 UTC
Permalink
Hi,
I don't really like to beat the dead horse, but here it goes... :)
The horse is far from dead. We really do need better image scaling, so
thanks for looking into it.
Not sure if cairo is to blame here, but the simple test I made shows
that the best filter it has in its toolbox is bilinear
(cairo-best.png). Gimp beats cairo with its left hand when it comes
to downsampling (see gimp-cubic.png). And even gimp-linear.png seems
to be much better than cairo's best attempt!
I'm willing to spend my time improving cairo's downsampling
capabilities, if experienced people out there are willing to help me
by answering my naive questions and pointing me in the right direction.
:)
1) How hard would it be to add a new cairo (or does it belong in
pixman?) filter, say bicubic interpolation?
The place to add new filters is in pixman. Here are two old mails on
the subject:

http://lists.freedesktop.org/archives/cairo/2009-June/017498.html

http://lists.cairographics.org/archives/cairo/2009-November/018600.html

Simply adding a bicubic interpolation filter is relatively
straightforward. All you have to do is write a new
bits_image_fetch_pixel_bicubic() in pixman-bits-image.c.
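For reference, one common choice of "bicubic" kernel is Catmull-Rom; a
floating-point sketch of the weight function (the real fetcher would
work in pixman's fixed point and gather a 4x4 neighbourhood):

    #include <math.h>

    /* Catmull-Rom cubic (a = -0.5); support is [-2, 2]. */
    static double
    cubic_weight (double x)
    {
        x = fabs (x);
        if (x < 1.0)
            return (1.5 * x - 2.5) * x * x + 1.0;
        if (x < 2.0)
            return ((-0.5 * x + 2.5) * x - 4.0) * x + 2.0;
        return 0.0;
    }

A bicubic sample is then the 4x4 neighbourhood around the sample point
weighted by cubic_weight(dx) * cubic_weight(dy).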

However, while a bicubic interpolation is better than bilinear and
would make upscaling look better, it wouldn't have that much effect on
downscaling. For downscaling, the basic problem is that we are
ignoring far too many source pixels, not so much the quality of the
filter function itself.

The suggestion in the two mails above is therefore to add another
filtering stage where the transformed image is resampled using a
configurable filter kernel with a configurable sampling rate.

A first implementation could just offer a box filter. With resampling,
even a relatively poor filter like that would produce vastly better
results than what we have now while not being too complicated to
implement. There is a bit more detail in the second of the mails.
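For concreteness, the box kernel in question is simply (a sketch, not
a proposed API):

    #include <math.h>

    /* Box resampling kernel of width w source pixels (w ~ 1/scale):
     * every sample inside the window gets equal weight. */
    static double
    box_kernel (double x, double w)
    {
        return (fabs (x) <= w / 2.0) ? 1.0 / w : 0.0;
    }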
3) I know from past experience that even bicubic can be faulty when it
comes to downsampling by a factor beyond 1/4. I have solved this
issue by using a Lanczos (sinc) filter once. Should we consider using
this filter too?
What you were seeing was likely just the issue that the bicubic filter
was run on too few source pixels. If you use a bicubic filter with
enough samples, that problem should go away.

But even with proper resampling, I think it might make sense to have
high-quality filters such as Lanczos available. A gaussian filter
would also be interesting since gaussian blur is a pretty useful
effect.


Soren
Alexander Shulgin
2010-06-30 20:08:26 UTC
Permalink
Post by Soeren Sandmann
The place to add new filters is in pixman. Here are two old mails on
http://lists.freedesktop.org/archives/cairo/2009-June/017498.html
http://lists.cairographics.org/archives/cairo/2009-November/018600.html
Simply adding a bicubic interpolation filter is relatively
straightforward. All you have to do is write a new
bits_image_fetch_pixel_bicubic() in pixman-bits-image.c.
These look like good starting points, thanks!

--
Alex
Alexander Shulgin
2010-07-14 06:46:03 UTC
Permalink
Post by Soeren Sandmann
The place to add new filters is in pixman. Here are two old mails on
http://lists.freedesktop.org/archives/cairo/2009-June/017498.html
http://lists.cairographics.org/archives/cairo/2009-November/018600.html
Simply adding a bicubic interpolation filter is relatively
straightforward. All you have to do is write a new
bits_image_fetch_pixel_bicubic() in pixman-bits-image.c.
However, while a bicubic interpolation is better than bilinear and
would make upscaling look better, it wouldn't have that much effect on
downscaling. For downscaling, the basic problem is that we are
ignoring far too many source pixels, not so much the quality of the
filter function itself.
The suggestion in the two mails above is therefore to add another
filtering stage where the transformed image is resampled using a
configurable filter kernel with a configurable sampling rate.
Pardon my silence, some stuff was keeping me too busy for this. :)

So if I understand correctly, Jeff's patch is good, but it's too
specialized for resampling pixman images wholesale, while it would be
better to make resampling possible for individual destination pixels
when compositing?

While I'll start digging in that direction, please make any notes to
correct me if I'm not quite right here.

I'll try to use the sample Emacs screenshots from my first mail in this
thread for testing. Does anyone have any good samples demonstrating
resampling quality differences which I might find useful?

--
Regards,
Alex
Soeren Sandmann
2010-07-15 14:48:23 UTC
Permalink
Post by Alexander Shulgin
So if I understand correctly, Jeff's patch is good, but it's too
specialized for resampling pixman images wholesale,
Basically, Jeff's patch only did scaling, but pixman supports
arbitrary transformations and we need resampling in those cases
too. It may be interesting to look into adding Jeff's code as fast
paths for scaling.
Post by Alexander Shulgin
while it would be better to make resampling possible for individual
destination pixels when compositing?
Yeah, that's the general idea.

Right now, pixman_image_composite (src, mask, dest) works more or less
like this:

For each destination pixel, a transformed location in the source and
mask images is computed. Then, based on the filter attributes,
interpolated values are computed for those locations. Finally, those
values are composited together with the destination pixel and written
back to the destination.

We need to modify this algorithm to work like this:

For each destination pixel, several transformed source/mask
locations are computed corresponding to a subpixel grid in the
destination pixel. The interpolated values for these locations are
then averaged together before being composited.
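In code, the modified fetch might look roughly like this (a sketch
with made-up helper names, not the actual pixman implementation):

    typedef struct { double r, g, b, a; } color_t;

    /* Assumed to exist: map a destination-space point through the
     * transform, and fetch an interpolated (e.g. bilinear) value. */
    extern void    transform_point    (double dx, double dy,
                                       double *sx, double *sy);
    extern color_t fetch_interpolated (double sx, double sy);

    /* Average an n x n subpixel grid inside one destination pixel. */
    static color_t
    fetch_resampled (int dst_x, int dst_y, int n)
    {
        color_t acc = { 0, 0, 0, 0 };

        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
            {
                double sx, sy;
                transform_point (dst_x + (i + 0.5) / n,
                                 dst_y + (j + 0.5) / n, &sx, &sy);
                color_t c = fetch_interpolated (sx, sy);
                acc.r += c.r; acc.g += c.g;
                acc.b += c.b; acc.a += c.a;
            }

        double norm = (double) (n * n);
        acc.r /= norm; acc.g /= norm;
        acc.b /= norm; acc.a /= norm;
        return acc;   /* averaged value, ready for compositing */
    }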
Post by Alexander Shulgin
I'll try to use the sample Emacs screenshots from my first mail in this
thread for testing. Does anyone have any good samples demonstrating
resampling quality differences which I might find useful?
The emacs screenshot is an okay image to use since downscaling text
images is a fairly frequent use case. This one:

[image link]

is also a good downscaling test because it is a big image with lots of
detail.

The Adobe RGB test image is another one:

[image link]

Finally, there is the zone plate, a well-known torture test for image
scalers:

[image link]

The attraction of this image is that it contains all representable
frequencies, so poor downscalers will tend to create strange Moiré
patterns.


Soren
Bill Spitzak
2010-07-15 18:25:33 UTC
Permalink
Post by Soeren Sandmann
For each destination pixel, several transformed source/mask
locations are computed corresponding to a subpixel grid in the
destination pixel. The interpolated values for these locations are
then averaged together before being composited.
I think this is a poor explanation. The source pixels are not completely
random and approaching it this way will produce a very slow algorithm.

A much better explanation is that for each destination pixel a single
source *AREA* is defined. For the 6-element matrix being used by Cairo,
this source area has 6 degrees of freedom, and can be defined as a
parallelogram mapped to somewhere in the source image (for an arbitrary
3D transform, this source area has 8 degrees of freedom and is an
arbitrary convex quadrilateral).

This source area is used to calculate the weighting for all the pixels
from the source image; these weighted values are added to get the
destination pixel. "Filtering" is the algorithm by which the weights are
calculated; one possibility is that the weight is the fraction of the
area that the source pixel intersects, but there are both much better
ones and much faster ones. In particular the weights may be non-zero for
pixels outside the area.

It is very common that the source area is reduced to a simpler object by
throwing away some of the degrees of freedom, before figuring out the
weights. For instance the current implementation is equivalent to
throwing away all the information except the xy center of the shape, and
then calculating the weight as the ratios of the Manhattan distances to
the nearest pixels. This is obviously not good enough.

I think acceptable results are achieved by reducing the shape to the
closest axis-aligned rectangle or ellipse (thus 4 degrees of freedom)
before using it. Some algorithms go further and reduce it to a circle
(3 degrees of freedom), some keep the angle of the ellipse (5 degrees of
freedom). A more restricted shape makes the filtering algorithm much
simpler and thus faster and very often worth it.

I still feel the best approach for Cairo is for the source images to
keep track of a single "scaled" version. This is an integer down-rez of
the original image, the scale selected so that it is the next larger
1/integer over the actual transform. This image is then linearly
interpolated by an unchanged Pixman to produce the final image. The
image is only recalculated when the integer scale changes. This allows
drawing the same image repeatedly, or with changes such as rotation, at
the same speed as now. Note that this is similar to mip-mapping
but produces better results and may be much more appropriate for the use
of Cairo, especially if 3D transforms are not being supported anyway.
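A sketch of the bookkeeping (a hypothetical cache, not cairo
internals; box_reduce is assumed to exist):

    #include <pixman.h>

    extern pixman_image_t *box_reduce (pixman_image_t *src, int n);

    typedef struct {
        pixman_image_t *scaled;  /* cached n:1 box reduction, or NULL */
        int             factor;  /* the integer n it was built with */
    } scaled_cache_t;

    /* Rebuild the cached reduction only when the integer factor
     * changes; otherwise reuse it at full speed. */
    static pixman_image_t *
    get_scaled (scaled_cache_t *cache, pixman_image_t *src, int n)
    {
        if (cache->scaled && cache->factor == n)
            return cache->scaled;
        if (cache->scaled)
            pixman_image_unref (cache->scaled);
        cache->scaled = box_reduce (src, n);
        cache->factor = n;
        return cache->scaled;
    }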

There also needs to be a fix for how Cairo does scale-up. It needs to do
non-fuzzy edges when the source is black-padded, and I now think it
should also render the pixels as rectangles. This will require
alterations to how Pixman does its linear interpolation.
Soeren Sandmann
2010-07-15 21:53:32 UTC
Permalink
Bill Spitzak <spitzak at gmail.com> writes:


[replying out of order]
Post by Bill Spitzak
I still feel the best approach for Cairo is for the source
images to keep track of a single "scaled" version. This is an
integer down-rez of the original image, the scale selected so
that it is the next larger 1/integer over the actual
transform. This image is then linearly interpolated by an
unchanged Pixman to produce the final image. The image is only
recalculated if the integer scale changes. This will allow
drawing the same image repeatedly and with changes such as
rotation with the same speed as it has now. Note that this is
similar to mip-mapping but produces better results and may be
much more appropriate for the use of Cairo, especially if 3D
transforms are not being supported anyway.
I agree that the 1/integer approach is probably a good one for
cairo. Note that the supersampling algorithm for pixman will allow
this, and allow it to happen at full speed.

For example, if the closest 1/integer factor is 5x3, then cairo would
set a supersampling grid of 5x3 with a NEAREST interpolation and the
result from pixman would be precisely the obvious box averaging. It
would run at a reasonable speed too, even with a quite naive
implementation. The total per-source-pixel cost would be a couple of
bit shifts and a couple of additions.
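Spelled out for the exact 5x3 case (a standalone grayscale sketch, not
pixman code):

    #include <stdint.h>

    /* NEAREST on a 5x3 supersampling grid at an exact 5x3 integral
     * downscale degenerates to averaging each 5x3 source block. */
    static uint8_t
    box_average_5x3 (const uint8_t *src, int stride, int dx, int dy)
    {
        unsigned sum = 0;

        for (int j = 0; j < 3; j++)
            for (int i = 0; i < 5; i++)
                sum += src[(dy * 3 + j) * stride + (dx * 5 + i)];

        return (uint8_t) (sum / 15);  /* 15 samples per dest pixel */
    }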

With that in place, it would then make a lot of sense to look into
adding Jeff's code as fast paths to make integral downscaling really
fast.

At the same time, the supersampling algorithm would not break down
horribly in non-integer cases such as rotations, and it has the
advantage that it has well-defined results for gradients and a
hypothetical polygon image. See below for more justification for the
supersampling algorithm.
Post by Bill Spitzak
There also needs to be a fix for how Cairo does scale-up. It needs to
do non-fuzzy edges when the source is black-padded,
This has been discussed a couple of times already. I think the outcome
each time has been that (a) there is no clear way to precisely define
what it actually means as a filter, and (b) you can already get the
desired result by setting the repeat mode to PAD and clipping to a
suitably scaled rectangle. If you render that to a group, then you
have a black padded, scaled image with sharp borders.
Post by Bill Spitzak
and I now think it should also render the pixels as rectangles. This
will require alterations to how Pixman does its linear interpolation.
What does this mean? NEAREST upscaling will certainly give you
rectangles, but presumably you mean something else.
Post by Bill Spitzak
Post by Soeren Sandmann
For each destination pixel, several transformed source/mask
locations are computed corresponding to a subpixel grid in the
destination pixel. The interpolated values for these locations are
then averaged together before being composited.
I think this is a poor explanation. The source pixels are not
completely random and approaching it this way will produce a very slow
algorithm.
It was not intended as a general explanation of rescaling
algorithms. It was intended to explain the supersampling approach I'm
advocating. I don't think it will result in a very slow algorithm,
particularly not for integer downscaling.
Post by Bill Spitzak
A much better explanation is that for each destination pixel a single
source *AREA* is defined. For the 6-element matrix being used by
Cairo, this source area has 6 degrees of freedom, and can be defined
as a parallelogram mapped to somewhere in the source image (for an
arbitrary 3D transform, this source area has 8 degrees of freedom and
is an arbitrary convex quadrilateral).
This source area is used to calculate the weighting for all the pixels
from the source image; these weighted values are added to get the
destination pixel. "Filtering" is the algorithm by which the weights
are calculated; one possibility is that the weight is the fraction of
the area that the source pixel intersects, but there are both much
better ones and much faster ones. In particular the weights may be
non-zero for pixels outside the area.
A full understanding of image transformations I think requires a
signal processing approach. In particular, when you say that the
destination is mapped to a parallelogram in the source image and you
talk about the source pixels that it intersects, you are implicitly
assuming that pixels are rectangular areas. That's a useful model in
many cases, but for image manipulation it's usually better to consider
them point samples. In that model, a destination pixel is mapped to a
*point* in the source image.

It then becomes clear that we need *two* filters: one for
interpolation and one for resampling. The interpolation filter
reconstructs points in between pixels and the resampling one removes
high frequencies introduced by the transformation.

One interpolation filter is NEAREST which is effectively treating the
source pixels as little squares. Currently we don't have any
resampling filter (or I suppose our resampling filter is a Dirac
delta) which means we get terrible aliasing from the high frequencies
that we fail to remove.

The resampling filter is an integral computed over the reconstructed
image (which is defined on all of the real plane). This is difficult
to do in a computer, so we have to do some sort of approximation. The
approximation I'm suggesting is to replace it with a sum.
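Written out, with T the transform, I the reconstructed source and r
the resampling kernel, the ideal resampling is

    d(x, y) = \int \int r(s, t) I(T(x + s, y + t)) ds dt

and the supersampling approximation is

    d(x, y) ~= (1/N) \sum_k I(T(x + s_k, y + t_k))

where the (s_k, t_k) form a subpixel grid inside the destination
pixel; the uniform weights amount to a box resampling kernel.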

See also these notes that Owen wrote a long time ago:

http://www.daimi.au.dk/~sandmann/pixbuf-transform-math.pdf
Post by Bill Spitzak
It is very common that the source area is reduced to a simpler object
by throwing away some of the degrees of freedom, before figuring out
the weights. For instance the current implementation is equivalent to
throwing away all the information except the xy center of the shape,
and then calculating the weight as the ratios of the Manhattan
distances to the nearest pixels. This is obviously not good enough.
I think acceptable results are achieved by reducing the shape to the
closest axis-aligned rectangle or ellipse (thus 4 degrees of freedom)
before using it. Some algorithms go further and reduce it to a circle
(3 degrees of freedom), some keep the angle of the ellipse (5 degrees
of freedom). A more restricted shape makes the filtering algorithm
much simpler and thus faster and very often worth it.
There are tons and tons of resampling algorithms. Doing it with
supersampling is perhaps a bit unconventional, but there are some
advantages to it:

- It works with arbitrary transformations

- It doesn't have pathological behavior in corner cases.

- It can be applied to gradients and polygons too

- It is conceptually straightforward (it approximates the integral in
the ideal resampling with a sum).

- With a high sampling rate and a good interpolation filter, it will
produce high quality output.

- It is simple to implement in a pixel shader without pre-computing
lookup tables.

- It has explicit tradeoffs between quality and performance. (The
sample rate and the resampling filter).

- You can predict its performance just from the parameters: it will do
this many source pixel lookups per destination pixel; it will do
this much arithmetic per destination pixel.

I think other algorithms generally have shortcomings in one or more of
these areas.


Soren
Krzysztof Kosiński
2010-07-21 22:37:03 UTC
Permalink
Post by Soeren Sandmann
There are tons and tons of resampling algorithms. Doing it with
supersampling is perhaps a bit unconventional, but there are some
advantages to it:
Can you describe in more detail how this would work? I would like to
try implementing this algorithm, because Cairo's bitmap scaling is not
good enough for Inkscape's needs. In particular, I want to use it when
resizing intermediate renderings before applying fixed-resolution
filters.

Regards, Krzysztof Kosiński
Krzysztof Kosiński
2010-08-03 20:52:50 UTC
Permalink
Trying to resurrect this topic again...

Can someone tell me which file the interpolation code is in?

Regards, Krzysztof
Alexander Shulgin
2010-06-27 05:20:47 UTC
Permalink
Hi all,

I don't really like to beat the dead horse, but here it goes... :)

The other day I was using Alt+TAB to switch between windows in my
brand new Ubuntu desktop and was unpleasantly surprised that the window
thumbnails' quality stood in striking contrast to the overall eye-candy
of the desktop (other effects seem to use OpenGL).

Not sure if cairo is to blame here, but the simple test I made shows
that the best filter it has in its toolbox is bilinear
(cairo-best.png). Gimp beats cairo with its left hand when it comes
to downsampling (see gimp-cubic.png). And even gimp-linear.png seems
to be much better than cairo's best attempt!

I'm willing to spend my time improving cairo's downsampling
capabilities, if experienced people out there are willing to help me
by answering my naive questions and pointing me in the right direction.
:)

So my current questions are:

1) How hard would it be to add a new cairo (or does it belong in
pixman?) filter, say bicubic interpolation?

2) How come Gimp's linear looks much like cubic? Does it seem to do
something more than just bilinear interpolation?

3) I know from past experience that even bicubic can be faulty when it
comes to downsampling by a factor beyond 1/4. I have solved this
issue by using a Lanczos (sinc) filter once. Should we consider using
this filter too?

That's all for now.

All the best!
--
Alex
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cairo-resampling-test.c
Type: text/x-csrc
Size: 839 bytes
Desc: not available
URL: <http://lists.cairographics.org/archives/cairo/attachments/20100627/fd8c8ee9/attachment-0001.c>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: src.png
Type: image/png
Size: 88645 bytes
Desc: not available
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cairo-best.png
Type: image/png
Size: 8949 bytes
Desc: not available
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gimp-cubic.png
Type: image/png
Size: 13248 bytes
Desc: not available
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gimp-linear.png
Type: image/png
Size: 10764 bytes
Desc: not available
Bill Spitzak
2010-08-13 05:29:14 UTC
Permalink
Cairo currently does only bilinear interpolation.

This is wrong for any scale less than 1, although the artifacts do not
really appear until scales less than 0.5. Bilinear cannot use more than
4 adjacent pixels in a square to produce an output pixel, and if you
scale down far enough the destination pixel will cover far more than 4
pixels in the source.

I think everybody is in agreement that this needs to be fixed.
Alexander Shulgin
2010-06-29 07:04:43 UTC
Permalink
Hi all,

I don't really like to beat the dead horse, but here it goes... :)

(Apparently my first try didn't pass moderation, so I'm retrying now
with a more compact attachment.)

The other day I was using Alt+TAB to switch between windows in my
brand new Ubuntu desktop and was unpleasantly surprised that the window
thumbnails' quality stood in striking contrast to the overall eye-candy
of the desktop (other effects seem to use OpenGL).

Not sure if cairo is to blame here, but the simple test I made shows
that the best filter it has in its toolbox is bilinear
(cairo-best.png). Gimp beats cairo with its left hand when it comes
to downsampling (see gimp-cubic.png). And even gimp-linear.png seems
to be much better than cairo's best attempt!

I'm willing to spend my time improving cairo's downsampling
capabilities, if experienced people out there are willing to help me
by answering my naive questions and pointing me in the right direction.
:)

So my current questions are:

1) How hard would it be to add a new cairo (or does it belong in
pixman?) filter, say bicubic interpolation?

2) How come Gimp's linear looks much like cubic? Does it seem to do
something more than just bilinear interpolation?

3) I know from past experience that even bicubic can be faulty when it
comes to downsampling by a factor beyond 1/4. I have solved this
issue by using a Lanczos (sinc) filter once. Should we consider using
this filter too?

That's all for now.

All the best!
--
Alex
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cairo-resampling-test.tar.gz
Type: application/x-gzip
Size: 116627 bytes
Desc: not available
URL: <http://lists.cairographics.org/archives/cairo/attachments/20100629/a3ce5cc7/attachment-0001.bin>
Owen Taylor
2010-08-13 18:00:10 UTC
Permalink
The other day I was using Alt+TAB to switch between windows in my
brand new Ubuntu desktop and was unpleasantly surprised that the window
thumbnails' quality stood in striking contrast to the overall eye-candy
of the desktop (other effects seem to use OpenGL).
I think pixman and cairo are pretty irrelevant for this. Window
downscaling for previews needs to:

A) Leave the images in video memory
B) Be fast and hardware accelerated

The basic way that GPUs sample images is bilinear sampling. As we see
with Cairo currently, that doesn't work nicely once you are scaling down
by more than a factor of two. In a normal game, the way things work is
that the game provides a version of the image with multiple levels of
detail (a mipmap) and the GPU picks the closest one to avoid large
amounts of scaling. Or there are extensions for automatic mipmap
generation.

Automatic mipmap generation doesn't really mix with the
"texture_from_pixmap" extension used for OpenGL compositors in X
(certainly not with any of the free drivers), but it's possible to
emulate the effect with a bit of programming. I wrote code to do this
for Mutter, the GNOME 3 window manager. The effect isn't as good as you
could get with real high-quality filtering, but it's vastly better than
what you are seeing.
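For reference, the emulation can be as simple as building levels by
repeated 2x2 box averaging (a minimal grayscale sketch; real code
handles RGBA and uploads each level as a texture):

    #include <stdint.h>

    /* Produce the next mipmap level: each destination pixel is the
     * rounded average of a 2x2 block of the source level. */
    static void
    halve_image (const uint8_t *src, int sw, int sh, uint8_t *dst)
    {
        int dw = sw / 2, dh = sh / 2;

        for (int y = 0; y < dh; y++)
            for (int x = 0; x < dw; x++)
            {
                unsigned sum = src[(2*y)     * sw + (2*x)]
                             + src[(2*y)     * sw + (2*x + 1)]
                             + src[(2*y + 1) * sw + (2*x)]
                             + src[(2*y + 1) * sw + (2*x + 1)];
                dst[y * dw + x] = (uint8_t) ((sum + 2) / 4);
            }
    }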

- Owen
Alexander Shulgin
2010-08-15 11:30:31 UTC
Permalink
Post by Owen Taylor
The other day I was using Alt+TAB to switch between windows in my
brand new Ubuntu desktop and was unpleasantly surprised how window
thumbnails quality was in striking contrast with the overall eye-candy
of the desktop (other effects seem to use OpenGL).
I think pixman and cairo are pretty irrelevant for this. Window
downscaling for previews needs to:
A) Leave the images in video memory
B) Be fast and hardware accelerated
The basic way that GPUs sample images is bilinear sampling. As we see
with Cairo currently, that doesn't work nicely once you are scaling down
by more than a factor of two.
Thanks for your comment, Owen.

Unfortunately, I'm still too busy with other things (i.e. life). If
only I could convince my employer to give me some time to work on this
(as we currently use a backwards hack to overcome the downscaling
problems)...

OTOH, I see Krzysztof has taken this problem more seriously than I
did, so there's hope for some progress soon--thanks, Krzysztof!

--
Alex
