NetSurf on ARMv7 platforms
by John-Mark Bell
Afternoon,
So, NetSurf runs mostly OK on ARMv7 platforms with their backwards-incompatible
unaligned LDR behaviour turned on. Tinct, however, appears to perform some LDRs
from unaligned addresses, resulting in crashes.
Here's a backtrace:
> Fatal signal received: Segmentation fault
>
> Stack backtrace:
>
> Running thread 0x59e53c
> ( 5a7ee0) pc: 462860 lr: 74f4c sp: 5a7ee4 __write_backtrace()
> ( 5a7f10) pc: 74d64 lr: 463298 sp: 5a7f14 ^ro_gui_signal()
> ( 5a7f38) pc: 463288 lr: 462f58 sp: 5a7f3c __unixlib_exec_sig()
> ( 5a7fa0) pc: 462a18 lr: 463870 sp: 5a7fa4 __unixlib_raise_signal()
> ( 5a7fb0) pc: 463774 lr: 5a6c48 sp: 5a6c04 __h_cback()
>
> Register dump at 005a7fb4:
>
> a1: 12f a2: 2d a3: 1f4 a4: 0
> v1: 0 v2: 0 v3: 202513b4 v4: 1
> v5: 3d5 v6: 0 sl: 41b78d39 fp: 67197124
> ip: 20248394 sp: 5a6c04 lr: 5a6c48 pc: 2024609c
> cpsr: 80000113
>
> 20246088 : .p.â : e21a7003 : ANDS R7,R10,#3
> 2024608c : .... : 0a000008 : BEQ &202460B4
> 20246090 : .àgâ : e267e003 : RSB R14,R7,#3
> 20246094 : ..^á : e15e0000 : CMP R14,R0
> 20246098 : .à Á : c1a0e000 : MOVGT R14,R0
> 2024609c : .??å : e59a9000 : LDR R9,[R10,#0]
> 202460a0 : ..@Ð : d040000e : SUBLE R0,R0,R14
> 202460a4 : ..?Ò : d2800003 : ADDLE R0,R0,#3
> 202460a8 : ..?À : c0800007 : ADDGT R0,R0,R7
>
> Invalid pc address bebeec
For reference, Tinct is loaded at 202416B4.
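For anyone poking at a fix: Tinct itself is ARM assembler, so the following C
is purely an illustrative sketch of the general cure, which is to avoid
dereferencing a potentially unaligned pointer as a whole word and to copy
byte-wise (e.g. via memcpy) instead:

  #include <stdint.h>
  #include <string.h>

  /* Safe regardless of alignment: the compiler emits byte loads (or an
   * unaligned-capable access) as appropriate for the target. */
  static inline uint32_t read_word(const uint8_t *p)
  {
          uint32_t w;
          memcpy(&w, p, sizeof w);
          return w;
  }

  /* The pattern that faults under ARMv7 strict alignment checking, as the
   * LDR R9,[R10,#0] above does when R10 is not word-aligned. */
  static inline uint32_t read_word_unsafe(const uint8_t *p)
  {
          return *(const uint32_t *)p;
  }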
J.
13 years, 3 months
configuring Netsurf-framebuffer
by Moutusi De
Hi All,
I have compiled NetSurf ("netsurf-r9577.tar.gz") and built it with the
framebuffer option.
(Just to give some background: we are working on an embedded handheld
device with internet connectivity.)
I couldn't find any option to configure the proxy host or port in this
version of NetSurf. (To access external websites from my company network,
I need to configure proxy settings.)
Consequently, when I run NetSurf, I get the following error message.
-----------------------------------------------
"Sorry Netsurf was unable to display this page.
Failed Connect to www.netsurf-browser.org:80: Resource temporarily
unavailable"
-----------------------------------------------
Could anyone please let me know how to resolve this?
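I am guessing the proxy would be set via NetSurf's Choices file with entries
along the following lines, but the option names below are only my assumptions
(the authoritative list should be in desktop/options.c of the source tree),
and proxy.example.com / 8080 stand in for our real proxy host and port:

  http_proxy:1
  http_proxy_host:proxy.example.com
  http_proxy_port:8080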
Just to add: when I tried to connect to our company intranet, it worked fine.
Thanks,
Moutusi.
13 years, 11 months
Hotlist launch behaviour
by Gavin Wraith
Apologies if this topic has already been covered. I searched the archive
for 'hotlist' but did not find anything relevant. I am using r9592 on
RISC OS 5.14. I note that if one selects a directory in the hotlist and
then launches it, all the URLs within it, and within its subdirectories
recursively, are launched, each in its own window. Can this really be
the intended behaviour? I have hundreds, if not thousands, of URLs in
the hotlist, and I cannot conceive of an occasion when I would want
to launch more than a few of them simultaneously. The effect is to
cause a crash: I cannot then quit NetSurf either from the iconbar or from
the Task Manager window.
In 2005 I mentioned in Archive-Online that I preferred to use the filer
and files of type URL rather than a dedicated hotlist. I got slapped down,
in a sense, because lots of people expect to use a hotlist, and because
the filer, as a database, is rather restricted - it does not keep a record
of usage, for example. In my case I do not need such extras, but I still
think that the hotlist is a duplication of functionality.
I would like to suggest that the current behaviour of launching a
directory in the hotlist be modified so as not to recurse down to
subdirectories. Or, perhaps there should be a configuration option
to control this?
--
Gavin Wraith (gavin(a)wra1th.plus.com)
Home page: http://www.wra1th.plus.com/
13 years, 12 months
Re: Hotlist launch behaviour
by John-Mark Bell
On Sun, 2009-09-27 at 12:59 +0100, Gavin Wraith wrote:
> In message <1254051536.30649.2.camel@duiker> you wrote:
>
> > As for whether it's intended behaviour or not, I can't recall, I'm
> > afraid. I suspect it is intentional, otherwise I can't imagine why
> > effort would have been expended implementing it.
>
> Perhaps "intend" covers a range of possibilities, incorporating
> different levels of forethought ;). Does the core enforce an
> upper bound on the number of windows opened?
No. There's no such artificial limit.
> Unless rather unusual display facilities are to be catered for, I
> cannot imagine anybody wanting more than, say, 32 windows open at
> the same time.
Well, on those platforms where tabbed browsing is supported, the core
makes no distinction between a tab and a window. Many people have
hundreds of open tabs in Firefox, for example. Thus artificially
limiting the number of windows in NetSurf seems counterproductive.
> > This is one of the things that has been moved into the core code, but we
> > can't merge it because it will break the RISC OS frontend.
>
> Again, apologies if this has already been covered. Is it easy to deduce
> from the core code what sort of API a new, more compliant, version (of
> the RISC OS frontend) should expose? Is there a header file (or group
> of header files) that constitute the common interface between the
> core and the several platforms?
The treeview is documented here:
http://wiki.netsurf-browser.org/Documentation/Treeview
The hotlist, cookies, global history, and SSL certificate display APIs
can be found in the relevant header files in the desktop/ directory.
You'll need to check out the following branch, as the relevant changes
to the core are not on trunk, as already explained:
svn://svn.netsurf-browser.org/branches/paulblokus/treeview
J.
13 years, 12 months
String internment contexts and content reuse
by John-Mark Bell
Hi,
The current way in which we manage string internment contexts means that
we cannot share stylesheets between documents. This results in unnecessary
fetching and conversion of duplicate copies of a stylesheet when browsing
between pages on a site.
For example:
http://example.org/ uses /foo.css and /bar.css. It has a link
to /baz.html.
/baz.html also uses /foo.css and /bar.css.
The user navigates to http://example.org/. We fetch and render the page,
which gives us 3 contents: one for the HTML and one for each of the
stylesheets. The stylesheets are created using the string internment
context used by the HTML content.
The user clicks the link to /baz.html. We fetch and render the page,
which also gives us 3 contents: one for the HTML and one for each of the
stylesheets. The stylesheets are created using the string internment
context used by /baz.html.
It's obvious, therefore, that we're duplicating the contents for foo.css
and bar.css for the sole reason that the internment contexts are
different.
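To make the coupling concrete, here is a rough C sketch; the names are
invented for illustration and are not taken from the source:

  #include <stdbool.h>
  #include <string.h>

  /* A stylesheet content is effectively only reusable when both the URL
   * and the internment context of the requesting document match. */
  struct css_content_key {
          const char *url;        /* e.g. "http://example.org/foo.css" */
          void *intern_ctx;       /* context owned by the HTML content */
  };

  static bool css_content_key_match(const struct css_content_key *a,
                  const struct css_content_key *b)
  {
          /* Matching on url alone would allow reuse; the intern_ctx
           * comparison is what forces a second copy of /foo.css to be
           * fetched and converted on behalf of /baz.html. */
          return strcmp(a->url, b->url) == 0 &&
                  a->intern_ctx == b->intern_ctx;
  }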
It would be good if we could design a solution that avoids the need for
these duplicate CSS contents, but doesn't result in one internment
context for the entire process. As an additional complication, scripting
and CSSOM will require that CSS contents are copy-on-write.
Does anyone have any bright ideas?
J.
14 years
Bitmaps
by John-Mark Bell
Hi,
I've spent the last couple of days playing with in-memory bitmap
representations in libnspng (a toy that I'm not intending we actually use --
it's just a useful playground for me right now).
Currently, we store bitmaps in memory as 32bpp buffers. This is pretty
inefficient in memory usage.
What I've currently got is probably a bit too dependent on PNG, but it's
showing promise. The in-memory format of image data is as follows:
+ Each scanline is stored in its original format (after removing any
PNG filtering), compressed using LZF. [Original format means one of
Greyscale, RGB, Paletted, Greyscale + Alpha, RGBA, at various bits
per component -- see the PNG spec for the full details].
+ Interlaced images have more scanlines than necessary. The extra
scanlines are treated as above.
This allows easy redraw of bitmap sections -- only decompress the
relevant scanlines.
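A rough C sketch of the kind of structure this implies (the field names here
are mine, not libnspng's):

  #include <stdint.h>

  struct compressed_bitmap {
          uint32_t width, height;
          uint8_t bits_per_component;   /* 1, 2, 4, 8 or 16 */
          uint8_t colour_type;          /* greyscale, RGB, paletted, ... */
          struct scanline {
                  uint16_t compressed_len; /* size of the LZF output */
                  uint8_t *data;           /* LZF-compressed scanline */
          } *scanlines;                 /* one entry per stored scanline */
  };

  /* Redrawing rows [y0, y1) then means decompressing only those entries,
   * expanding each to 32bpp RGBA, and handing it to the plotter. */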
The upshot of this is that memory requirements for bitmaps become
significantly smaller -- taking the screenshots directory (and
subdirectories) from the website source tree, the compressed format
described above requires, on average, 12.26% of the memory that an
uncompressed 32bpp buffer needs. This is a pretty major improvement
(not least because it actually allows me to render huge images on a
machine with little available RAM).
On the downside, processing and rendering is slower -- I've not measured
how much slower yet, especially as there's much scope for improvement:
+ Deinterlace images before storing them -- avoids the need to
consider interlacing during redraw.
+ Make all scanlines RGB(A) before compressing them -- avoids the
need to consider the possible scanline types during redraw.
(Completely dropping the alpha channel for opaque images provides
a free 25% reduction in space requirements.)
+ Compress chunks of 8 scanlines, say, to amortise the cost of
(de)compression, particularly during redraw.
Some questions:
1) Is it worth pursuing this further?
2) What kind of API do people want? Right now, when wanting to redraw,
the client passes a row callback which is called for every scanline
in the specified clipping region. The bitmap data fed to the
callback is uncompressed 32bpp RGBA. The callback is responsible for
actually outputting the scanline (including any alpha blending,
colour-space conversion, and scaling).
Given that changing the way in which bitmaps work impacts many parts of
the core and the frontends, I don't intend even to start a transition
until we've sorted out content caching.
J.
14 years
Caching
by John-Mark Bell
Hi,
Our existing content caching strategy breaks down in the face of string
internment contexts being shared between HTML documents and the
stylesheets they reference. There's potential for bad things like
reading through stale string pointers to occur.
To avoid this badness, we've got a hack that prevents all unknown
content types from being shared. This is a recent introduction, as a
result of discovering the badness described above.
The downside is that it effectively ensures that *no* contents are
shared. Thus, our memory usage is now sky high.
I've sketched out an improved cache design [1], that removes these
problems (and adds scope for disk caching, too, although that's a
secondary concern). I'd appreciate comments on this, as it's entirely
possible that I've missed something.
J.
1. http://source.netsurf-browser.org/trunk/netsurf/Docs/ideas/cache.txt
14 years
review of main gtk branch
by Mark B
Hi,
In pursuance of the goal of eventually merging the GTK changes, I've
produced a diff of the changes to the gtkmain branch since 16th
September that should be relevant to the final review.
Best,
Mark
http://www.halloit.com
Key ID 046B65CF
14 years
[PATCH] touchscreen fixes for gtk frontend
by Graham Gower
Hi,
I'm playing with NetSurf 2.1 on a mipsel touchscreen device (64 MB RAM,
380 MHz). I'm happy to say that NetSurf performs quite well with these
resources, although more RAM would be nice for larger pages. However, the
GTK frontend does not appear to work very well in conjunction with a
touchscreen.
The first chunk below makes drag scrolling usable. Without it, small
movements cause NetSurf to scroll far too much.
The second chunk inhibits propagation of motion events when they are very
small. Small motion events occur all the time when making touches on a
touchscreen, and as a result the BROWSER_MOUSE_CLICK_* states were very
difficult to trigger. This made tapping on links virtually impossible.
-Graham
--- netsurf.orig/gtk/gtk_window.c
+++ netsurf/gtk/gtk_window.c
@@ -220,6 +220,7 @@
GDK_BUTTON_PRESS_MASK |
GDK_BUTTON_RELEASE_MASK |
GDK_POINTER_MOTION_MASK |
+ GDK_POINTER_MOTION_HINT_MASK |
GDK_KEY_PRESS_MASK |
GDK_KEY_RELEASE_MASK);
GTK_WIDGET_SET_FLAGS(GTK_WIDGET(g->drawing_area), GTK_CAN_FOCUS);
@@ -344,6 +345,11 @@
bool shift = event->state & GDK_SHIFT_MASK;
bool ctrl = event->state & GDK_CONTROL_MASK;
+#define DIFF(a,b) (a>b?a-b:b-a)
+ if (DIFF(event->x, g->last_x) < 5.0
+ && DIFF(event->y, g->last_y) < 5.0)
+ return;
+
if (g->mouse->state & BROWSER_MOUSE_PRESS_1){
/* Start button 1 drag */
browser_window_mouse_click(g->bw, BROWSER_MOUSE_DRAG_1,
14 years