GSoC Report
by Adam Blokus
Hi,
I am greatly disappointed with my work this week. Although I finished my
idea of the fuzzy margins, with visible effects (look at how
http://www.netsurf-browser.org/documentation/guide is printed - a lot of
the broken text is fixed), I wasted a lot of time figuring out the maths
and geometry (PDF files have the y-coordinate rising upwards and the (0,0)
point in the bottom-left corner of the page) and made a lot of silly
errors while coding it the first time. I'm still in doubt whether to use
this method for the top of pages too; the top of a page is the more
visible part, so fuzzy margins there would probably make a more noticeable
difference.
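For the record, the geometry boils down to flipping the y axis. A minimal
sketch of the conversion, assuming a page_height value and NetSurf's usual
top-left origin (illustrative only, not the actual branch code):

/* Convert a y coordinate from NetSurf's plotter space (origin top-left,
 * y growing downwards) to PDF user space (origin bottom-left, y growing
 * upwards). */
static float pdf_y(float page_height, float netsurf_y)
{
        return page_height - netsurf_y;
}

/* A box of height h whose top edge is at netsurf_y is therefore placed
 * at pdf_y(page_height, netsurf_y + h), because PDF positions rectangles
 * and images by their bottom-left corner. */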
I also moved some responsibility for margins etc. into the plotter code,
to make printing less dependent on the way margins and page sizes are set.
For this week I plan to change the layout of printed content so that all
of it fits in the available width (the 'loose' layout that I wrote about
last week).
Adam
Hubbub process report
by Andrew Sidwell
I'd totally forgotten about the weekly status updates, but given that
I've just finished the majority of the tokeniser work, now would be a
good time to summarise what I've been working on for those who haven't
been following the IRC discussion.
In short, I have:
- imported libcss's "coverage" makefile target to make viewing test
coverage data easier
- updated the tokeniser to the current HTML5 [1] specification (as of
15th June 2008)
- updated the testsuite to the current html5lib [2] suite, and wrote
some new tests (contributed back to html5lib), such that the tokeniser
now has 99.3% test coverage
- committed stray printfs/#include <stdio.h>s and uncommitted them
multiple times
- moved to using assert() where that's the desired semantic
[1] http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenisation....
[2] a Python/Ruby HTML5 parser project which has a fairly comprehensive
test suite
So: the tokeniser is now functionally complete, at least until the spec
changes again, which it doesn't look like it will do soon. There are a
number of cleanups that I would like to make, so I may spend a day or
two making the code more pleasing.
Having spent some time hacking hubbub, I can see my original schedule
assigns time to the wrong places. I have a two-week period of
test-writing for the treebuilder which seems a little silly, since the
html5lib tests are fairly comprehensive; I would now expect to write only
a few tests (though more than I wrote for the tokeniser).
In the light of this, my current goal is to complete treebuilder work,
with similar test coverage to the tokeniser, by the mid-term evaluation
(end of week 6; 6th July). This represents slippage of a week over my
original schedule. After that, I will bind hubbub to libxml2, and then
via that to NetSurf, to be completed by end of week 7.
At this point, I expect that people outside the NetSurf community may
want to browse with it to see what an HTML5 browser is like. The rest
of the project I will then spend optimising, refactoring, and sending
spec feedback where the spec mandates bad behaviour.
Also, there will be some disruption to my internet access when I move to
a new flat whose lease starts 1st July; I expect I'll neglect unpacking
stuff when I move because hubbub is more interesting, so there's not too
much to worry about. :)
Cheers,
a.
GSoC Progress Report (June 16)
by Mike Lester
Since my last update I've accomplished quite a bit (at least it seems that
way). After finishing up the splitting of most dialogs from the main code, I
began to work on adding text selection and dragging capability. The mouse
and drag behavior is pretty much sorted out now and the best part is, text
selection works! Unfortunately I will not be able to get much work done this
next week as I'll be vacationing with my family in Colorado, but I look
forward to continuing my work as soon as I return (June 22).
Accomplished this week:
- Implemented mouse drags and settled on a clicking scheme (a rough
sketch of this state machine follows at the end of this list)
- When a mouse button is pressed, the state is considered PRESS, which
basically means it is waiting to become either a "Click" or a "Drag".
- Clicks are considered a press and release with no movement.
- If the mouse is moved while its state is PRESS, it is considered a
Drag.
- Modifier keys are only accepted if held when the mouse is pressed;
after the state is set to PRESS they can only be released.
- Basic text selection
- Text selection works properly according to the core code
- Text selection in an input box has several bugs
- If a key is pressed when input text is selected it does not
replace that text (nothing happens).
- If a selection is clicked, an "invisible" caret is placed there and
any text typed after that appears inside the selection without
clearing it.
- When a selection is made, Cut and Paste are insensitive (this is
probably due to some oversight of mine about caret placement).
- Drag saving/copying the selection isn't implemented yet
- Clipboard functions for both the url entry bar and the browser
itself (except for cut)
- Copy and Paste work as expected.
- Added a few functions to gtk_scaffolding to handle clipboard action
sensitivities
<http://library.gnome.org/devel/hig-book/stable/controls-sensitivity.html.en>
correctly
- After testing several different methods of keeping track of the
selection states in both the browser and the url bar, I came to the
conclusion that there was no clean and concise way to accomplish this
(mostly because there is no signal
<http://library.gnome.org/devel/gobject/unstable/signal.html> emitted
when a selection is created in the url bar). I consulted the Epiphany
<http://www.gnome.org/projects/epiphany/> code to see how they handled
this problem. Instead of constantly keeping track of whether or not the
actions should be sensitive, they only update sensitivity when
necessary, so:
- Clipboard actions are only set as insensitive when a menu containing
them (the Edit menu or the right-click menu) is displayed; otherwise
they are always sensitive, so that the keyboard accelerators are always
detected properly. The functions themselves handle cases where the
clipboard operation is not possible.
- These actions are hidden instead of desensitized in the right-click
menu.
- Cut works for the url bar but not yet for the browser as there does
not seem to be an easily accessible cut function in the core.
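To make the clicking scheme described above more concrete, here is a rough
sketch of the state machine; the names and structure are illustrative, not
the actual gtk front-end code:

/* Sketch of the press/click/drag scheme. Illustrative names only. */
enum mouse_state { MOUSE_IDLE, MOUSE_PRESS, MOUSE_DRAG };

struct mouse {
        enum mouse_state state;
        int press_x, press_y;   /* where the button went down */
        unsigned modifiers;     /* captured at press time only */
};

static void mouse_button_down(struct mouse *m, int x, int y, unsigned mods)
{
        m->state = MOUSE_PRESS;  /* waiting to become a click or a drag */
        m->press_x = x;
        m->press_y = y;
        m->modifiers = mods;     /* modifiers added later are ignored */
}

static void mouse_motion(struct mouse *m, int x, int y)
{
        if (m->state == MOUSE_PRESS && (x != m->press_x || y != m->press_y))
                m->state = MOUSE_DRAG;  /* any movement turns the press into a drag */
}

static void mouse_button_up(struct mouse *m)
{
        if (m->state == MOUSE_PRESS) {
                /* press + release with no movement: handle as a click */
        } else if (m->state == MOUSE_DRAG) {
                /* end of the drag, e.g. finish a text selection */
        }
        m->state = MOUSE_IDLE;
}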
Future goals for next week:
- Fix bugs with selections in a text input which are most likely related
to caret placement.
- Find a way to better implement desktop/text_input.c textarea_cut in
order to add cut functionality.
- Absolutely solidify mouse behavior once and for all.
- Move on to either a find toolbar or Downloads.
Things I need help with:
- Questions, concerns, comments, and suggestions about the mouse behavior
so that we can get it squared away and won't have to deal with it later.
- Does the RiscOS version have the same trouble with selections in a text
input, or did I overlook some important caret placement code on the gtk
side of things (as I highly suspect)?
- Schedule concerns. Should I:
- continue to work on text-selection/clipboard bugs even though it
will take me past the schedule,
- begin work on a Find toolbar, which was originally scheduled for
early June,
- or move on ahead to Downloads?
Thanks to tlsa for helping me pretty much constantly with mouse behavior,
to jmb for the helpful hints about current_redraw_browser, and of course
to rjek, my faithful mentor ;)
Mike
(Additional updates of a less formal nature can be found on my GSoC blog at
mikelgsoc.blogspot.com <http://www.mikelgsoc.blogspot.com> which is
currently aggregated at planet.netsurf-browser.org.)
GSoC Progress Report 06/15/08
by Sean Fox
Hi all,
Due to work-related problems I've had a tough time getting started on
my project. However, all issues have been resolved in that respect
and I intend to really get rolling over the next few weeks. The
probable libraries outlined in my original application included CSS,
Layout, NetSurf, BMP, and GIF libraries.
This week I began with the GIF library as follows:
> Familiarizing myself with the GIF code and how it works within NetSurf
> Moving all necessary code from NetSurf's "gifread" files to an external location in my branch
> Implementing callback functions so that libnsgif operates smoothly regardless of which bitmap functions the caller chooses to use (a rough sketch of the idea follows this list)
> Providing a test case in my branch (nsgiftest) where libnsgif is used outside of NetSurf (although the code is entirely based on NetSurf's implementation)
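A minimal sketch of what such a callback table might look like; the names
here are illustrative and may differ from the API actually committed to
the branch:

/* Bitmap callback table for the decoder: the library only ever touches
 * bitmaps through these functions, so a front end (GTK, RISC OS, or a
 * test harness) can plug in whatever bitmap representation it uses. */
typedef struct nsgif_bitmap_callbacks {
        void *(*bitmap_create)(int width, int height);
        void (*bitmap_destroy)(void *bitmap);
        unsigned char *(*bitmap_get_buffer)(void *bitmap);
        void (*bitmap_modified)(void *bitmap);
} nsgif_bitmap_callbacks;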
The majority of the work was bringing myself back up to speed in C
programming, moving things around, and familiarizing myself with how
NetSurf handles images and other data (fetchers and the like). Luckily,
understanding the fetchers should come in handy later when I get to the
Layout and NetSurf libraries.
Things I need help with:
> Testing of libnsgif under RiscOS to ensure proper functionality
> Review of the code submitted thus far to catch any mistakes/bugs
> Comments, ideas, and criticism regarding the API, naming conventions, etc. for this library (especially since this is the first library; I can follow a similar path as I move on to the next ones)
Agenda for next week:
> Complete the BMP library from start to finish (should be similar to the GIF library, so I'm hoping this will be quick)
> Begin to grasp the code behind NetSurf's layout/HTML rendering to start the Layout library
Comments and concerns:
I spoke with John-Mark regarding the CSS library and--all though he
left it up to me--I think we came to an agreement that it would be in
our best interest for me to begin work on the Layout and NetSurf
libraries prior to the CSS work. Because he is currently looking at
developing a new CSS parser, it would be more effective if I tackle
the other projects at my disposal. Any and all comments and concerns
from the development team are welcome. I'd especially love to hear
more regarding the API; the code I've published thus far is really
just a "rough draft" to be molded by everyone.
Sean
GSoC report
by Adam Blokus
Hi everyone,
This week's goals were:
> - adjust the width of the content to the width of the page, not keeping it
> at window-width, and
Done, but it still causes issue 1) below and does not yet handle issue 2).
> - make the pdf-plotting output the whole document, not just the first page
Done
> - get some idea of how to fix the problems that are reported for the
> current riscos printing (as single lines of text being split horizontally
> on the edge of pages)
I discuss this as issue 3) below.
Also, I have drafted the concept of a printer interface that will make it
possible to deal with any future paged-output formats, whether a real
printer or another file format; a rough sketch of what such an interface
might look like follows below. For now it implies adding at least four new
files to the sources. I am still not sure whether what I'm doing fits your
conventions for file locations and naming, but I hope that I'm at least
close to them ;)
Although it is working, I see the code done this week mainly as a concept
sketch or a skeleton to be filled in with further changes, most
importantly separating the window's content from anything printing does.
For now it works similarly to the RISC OS printing included in NetSurf;
it is just split into several functions, with the improvements still to
be done pointed out in the comments.
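To make the idea a bit more concrete, a minimal sketch of what such a
printer interface could look like; the names and layout are hypothetical
and do not necessarily match the new files in the branch:

/* Paged-output ("printer") interface sketch. */
struct content;
struct plotter_table;

struct printer {
        const struct plotter_table *plotter;  /* plotters for this target */

        /* set up the output (open the PDF file, start the print job, ...) */
        bool (*print_begin)(struct content *content);

        /* start a new page; returns false when no more pages are needed */
        bool (*print_next_page)(void);

        /* finish and release the output */
        void (*print_end)(void);
};

A generic print routine could then lay the content out for the page size
and drive print_begin / print_next_page / print_end together with the
target-specific plotters, whether the target is a PDF file or a real
printer.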
I have been having some problems with my ISP - that is why I haven't
shown up on #netsurf lately - but I am available via email (I check it at
least once a day) in case you would like to contact me.
The current issues, which I also see as guidelines for the following week
or two, are:
1) Window content is affected by printing:
This is what took a lot of my time - digging through the sources to get a
better understanding of all the functions related to my ideas for solving
this issue. At first I wanted to simply block redrawing of the window
content while printing is in progress (or redraw the window from a pixbuf
it had previously been stored into). That solution would save some of the
memory otherwise needed to duplicate the resized content (duplication
would probably, though not necessarily, require more memory in most cases
than a pixbuf screenshot). Now I believe that at least the box tree
should be duplicated, so that we can later modify the presentation more
dramatically once print stylesheets are taken into consideration. I am
still thinking about the point at which the duplication should happen
(surely not by refetching, most likely when building the box tree, or
maybe by depth-first-walking the window content's boxes and duplicating
them - but that would not let us apply print styles either).
I must admit that I am greatly influenced by your "Memory is valuable"
guideline for RISC OS and don't yet have any intuition about what amounts
of memory matter and how careful I should be with it ;)
2) Content too wide to fit a page is truncated at the page width:
After looking at how other browsers (mainly Firefox and Konqueror) handle
it, I found that fitting content onto pages is in most cases done by
making the layout very loose (for a good example, see the print previews
of http://www.derrickcomedy.com/videos/). Sites that are not too wide can
be handled by scaling the page a little (as Konqueror does with
www.netsurf-browser.org/downloads, which gets a loose layout in Firefox),
but this is not a general solution. Most probably I will have to go into
the layout code and add some paged alternatives to it.
3) Text is cut at the edges of pages:
Here I see two solutions, and I want to try both of them. First, for
reasonably small text, a kind of 'fuzzy' clipping. That is: apply two
bottom margin lines, one impassable for every kind of content and one
conventional. If the plotted text does not reach below the impassable
line, it is plotted uncut; if it would cross even the impassable margin,
we raise the bottom clipping line for this page to just above that text
and print it at the top of the next page.
For bigger text sizes this would result in permanently raising the lower
margin, so I think such text should either be treated as a "cuttable"
element or, if it is not really huge and is densely enough surrounded by
other elements, be handled as a case for the loose layout mentioned
earlier. This will require a little research, which is why I want to
start by writing the 'fuzzy' clipping this week.
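A rough sketch of the 'fuzzy' clipping idea, with an illustrative page
structure and y coordinates growing downwards (just the shape of the
approach, not the code I intend to commit):

/* Decide whether text spanning text_top..text_bottom fits on the current
 * page. If it would cross the impassable margin, raise the page's clip
 * line above the text and defer it to the next page. */
struct page {
        float conventional_bottom;  /* normal bottom margin */
        float impassable_bottom;    /* nothing may ever cross this line */
        float clip_bottom;          /* effective clip for this page */
};

static bool fuzzy_fits(struct page *pg, float text_top, float text_bottom)
{
        if (text_bottom <= pg->conventional_bottom)
                return true;  /* well inside the page */

        if (text_bottom <= pg->impassable_bottom)
                return true;  /* overflows a little: plot it uncut */

        /* would cross the impassable margin: clip above this text and
         * print it at the top of the next page instead */
        if (text_top < pg->clip_bottom)
                pg->clip_bottom = text_top;
        return false;
}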
Adam
Request to advertise Net-Surf on RoC (fwd)
by John-Mark Bell
FYI. I've not replied.
J.
---------- Forwarded message ----------
Date: Fri, 13 Jun 2008 09:12:21 +0100 (BST)
From: mhh(a)shrewsbury.org.uk
To: jmb(a)netsurf-browser.org
Subject: Request to advertise Net-Surf on RoC
John,
I'd like to place an advert for netsurf on
http://www.riscoscode.com
I was charging £10 but someone has pointed out to me that there are many worthwhile projects that are not about making
money and it would be unreasonable to expect them to pay to advertise.
So, would you like a FREE advert on the site ?
What I'm after is a PNG measuring about 152 pixels wide by 266 pixels high.
The height is less important than the width.
I just grabbed the latest version of netsurf, so that's certainly worth £10! :-)
Regards,
Martin Hansen
Netsurf on Linux PDA (ipaq h2200 + Angstrom)
by Patricio Rossi
Hello there, that is really amazing. I have a 2215 and I tried to put
Angstrom on it, but with no luck: I only got a black screen at boot time
with Opie, or 100% CPU usage with GPE after boot :( I'm using Familiar
Linux now :P
How can I compile stuff on my Linux box for the iPAQ h2200?
Thanks
My GSoC - progress report
by Adam Blokus
Hi,
During this week I have finished bitmap plotting:
- embedding PNG and JPEG bitmaps in PDF files
- adding other types of images as an RGB pixmap, with the alpha channel
extracted into a soft mask to make transparency work
- adding tiled bitmaps (drawing the same image a given number of times;
sketched below)
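As an illustration of the tiling, a sketch along the lines of the code in
the branch but with simplified, hypothetical parameters (Haru places
images by their bottom-left corner, hence the coordinate flip):

#include <hpdf.h>

/* Draw the same image cols x rows times, starting at (x0, y0) in
 * top-left-origin coordinates, flipping into PDF space via page_height. */
static void tile_image(HPDF_Page page, HPDF_Image image,
                float x0, float y0, float tile_w, float tile_h,
                int cols, int rows, float page_height)
{
        for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                        float x = x0 + c * tile_w;
                        float y = y0 + r * tile_h;

                        HPDF_Page_DrawImage(page, image, x,
                                        page_height - (y + tile_h),
                                        tile_w, tile_h);
                }
        }
}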
I also changed the formatting of my code a little, as some of you
requested - thanks for all the feedback in this area. My former
programming experience was mostly rather unorganized, and I would like to
improve and learn something in areas such as coding style as well.
In terms of things other than code, I have studied the PDF spec a lot
(to find out how the soft mask should work and how Unicode could be
supported), the part of NetSurf responsible for laying out web pages (to
prepare myself for making it work for paged output), and a lot of the
Haru sources (for the soft mask, for embedding images, and to get some
idea of how to make it a little more error-safe).
For the following week I plan to make the output in pages work - that is:
- adjust the width of the content to the width of the page, not keeping it at
window-width, and
- make the pdf-plotting output the whole document, not just the first page
- get some idea of how to fix the problems that are reported for the current
riscos printing (as single lines of text being split horizontally on the edge
of pages)
Now I see some ways of coding it:
1) re-laying out the current content with window redrawing blocked for a
moment - or with a dummy redraw that would show something like a
"screenshot" of the content displayed when the PDF export started -
something similar to what is done now in the RISC OS version
2) copying the whole content and working on it as if it were the same
site opened in another window
3) (not sure whether this differs from 2) making a new content with a
copy of the layout-related data, but where possible with members shared
with the original content (the same pointers in the struct) - I found
that talloc can handle adding extra 'parents' to an allocation, so this
should also be fairly easy to do (a small talloc sketch follows below)
So far: 1) has been complained about (in the context of the RISC OS
implementation), 2) would be the easiest, and 3) would allow us to save
some memory by not duplicating some data (I'm not sure yet whether the
amount would be significant, or whether it's just me who hasn't found
this already working for the same site opened in many windows).
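For 3), a small sketch of the talloc idea (the box type and the contexts
here are purely illustrative):

#include <talloc.h>

struct box;  /* layout box owned by the original content's context */

/* Give the box a second parent, so it is freed only after both the
 * original content and the print copy have let go of it. */
static bool share_box(TALLOC_CTX *print_ctx, struct box *box)
{
        return talloc_reference(print_ctx, box) != NULL;
}

/* later, when the print copy is destroyed:
 *         talloc_unlink(print_ctx, box);
 */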
Also, I want to look a little more into error reporting and type checking
in Haru, to improve my changes, make them "submittable", and send them as
patches to Haru.
Because I have some personal stuff to handle until Tuesday (I'm moving to
another place), I think I will move next week's report to the end of the
following Sunday (the 15th), so I can accomplish my goals. I hope that
this is not a problem for you.
Adam
PS. One more piece of good news: someone has reported that they have
started working on Unicode support for Haru :D
Re: r4263 adamblokus - in /branches/adamblokus/netsurf: gtk/gtk_bitmap.c pdf/pdf_plotters.c
by John Tytgat
In message <courier.4847DA11.00003E9E(a)atlanta.semichrome.net> you wrote:
> Author: adamblokus
> Date: Thu Jun 5 07:20:32 2008
> New Revision: 4263
>
> URL: http://source.netsurf-browser.org?rev=4263&view=rev
> Log:
> Added hadling images other than png and jpeg - with transparency.
>
> + rgb_buffer = (char*)malloc(3*img_width*img_height);
> + alpha_buffer = (char*)malloc(img_width*img_height);
> + if(!(rgb_buffer&&alpha_buffer)){
> + LOG(("Not enough memory?"));
> + return false;
> + }
Minor comment: the if() body needs "free(rgb_buffer); free(alpha_buffer);",
as one of the two calls can fail but not necessarily the other.
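In other words, something along these lines (free(NULL) is a no-op, so
both buffers can be released unconditionally):

rgb_buffer = (char *) malloc(3 * img_width * img_height);
alpha_buffer = (char *) malloc(img_width * img_height);
if (!(rgb_buffer && alpha_buffer)) {
        LOG(("Not enough memory?"));
        /* one of the allocations may have succeeded */
        free(rgb_buffer);
        free(alpha_buffer);
        return false;
}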
John.
--
John Tytgat
joty(a)netsurf-browser.org
Haru PDF Library
by Adam Blokus
This mail didn't get through yesterday - I think because of the attachments.
Both files can be found in my branch (branches/adamblokus/harutest/)
-------
Hi,
I wrote a small example of the usage of the PDF-generating library I'd
like to use (http://libharu.org/wiki/Main_Page). The code generates a PDF
file (also attached) with some graphic primitives and an image.
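For anyone who hasn't seen Haru before, the usage looks roughly like the
following minimal sketch (this is not the harutest code itself, just an
illustration of the API):

#include <hpdf.h>

int main(void)
{
        HPDF_Doc pdf = HPDF_New(NULL, NULL);  /* default error handling */
        if (pdf == NULL)
                return 1;

        HPDF_Page page = HPDF_AddPage(pdf);

        /* a filled rectangle; coordinates are bottom-left based, in points */
        HPDF_Page_SetRGBFill(page, 0.8, 0.8, 1.0);
        HPDF_Page_Rectangle(page, 100, 500, 200, 100);
        HPDF_Page_Fill(page);

        /* a stroked line */
        HPDF_Page_SetLineWidth(page, 2.0);
        HPDF_Page_MoveTo(page, 100, 450);
        HPDF_Page_LineTo(page, 300, 450);
        HPDF_Page_Stroke(page);

        HPDF_SaveToFile(pdf, "example.pdf");
        HPDF_Free(pdf);
        return 0;
}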
The provided functionality is enough to cover all the plotting functions
of the plotters_table interface. There may be some problems:
- with font handling (it didn't work with some fonts for me)
- with font encoding (as I understand it, there is some Unicode support
for Chinese and Japanese characters, but multibyte encodings don't seem
to be supported in general; perhaps this can be handled by converting
from UTF-8 to something else first?)
- with images - there is support for PNG, raw and JPEG but, as rjek
pointed out, not for GIF (this one can be added based on what we already
have in NetSurf)
There are still some developers working on this library, so these issues
should be fixed some day (especially the first one; the second and third
can be worked around in some ways). For now there are no critical
obstacles.
Other possible choices that I see are:
- using Cairo (it has PDF surfaces, but with more limited functionality;
rjek complained about its dependencies)
- producing a .ps file and converting it with Ghostscript
- hacking the sources of a program with a suitable licence to extract the
needed functions
- finding another library (although I did some research when writing my
proposal and over the last few days, and I didn't find anything better)
So I am proposing to use Haru - what do you think about it?
Adam