Firstly, thank you for doing this review and explaining it in detail. There are
a few points that I'd like to make in response.
We have come to understand that the curl library simply should not be
relied upon beyond its http/https implementations. We have implemented
fetchers directly for things like file: instead.
My experience with libcurl remains that it can be relied on for all protocols.
Because the curl libraries in actual common use have numerous bugs and
unwanted behaviours (for example, curl applies proxy settings to file:
URLs), and because of curl's huge memory footprint *in our use case*,
it looks increasingly like a poor fit.
I haven't seen numerous bugs and have found libcurl to be highly responsive in
fixing any that are found, or accepting my patches to do so.
Because of this we have gone to great efforts to *remove* fetcher
support from the curl library. As discussed at previous developer
meetings, Daniel Silverstone is working on creating an http library
to eventually remove our dependency on curl completely.
Overall, in my opinion, libcurl is the highest quality external library that
we use. It is well managed and has saved us very much work. I consider it a
valued partner of NetSurf, not just a library.
Nevertheless, I will support any developer wishing to write our own http
implementation (and a gopher implementation, for example). However, keeping
libcurl as an option may be ideal.
As mentioned to you, the approach we have chosen for things that
generate output (primarily directory listings, but also the about:
handler) which the browser should display as HTML is for the *fetcher*
layer to return the data as generated HTML. We have created helper
functions to perform this generation for directory listings.
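A minimal sketch of what such a helper might look like (the function name,
signature, and HTML markup here are illustrative assumptions, not NetSurf's
actual directory-listing API):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Sketch: emit one directory-listing entry as an HTML table row into a
 * caller-supplied buffer, so the fetcher layer can hand finished HTML
 * to the content layer. Returns true on success, false if the buffer
 * was too small. (Hypothetical helper; not the real NetSurf API.) */
static bool dirlist_generate_row(const char *name, const char *url,
		char *buffer, int buffer_length)
{
	int n = snprintf(buffer, (size_t) buffer_length,
			"<tr><td><a href=\"%s\">%s</a></td></tr>\n",
			url, name);
	return n >= 0 && n < buffer_length;
}
```

The content layer then sees ordinary text/html and needs no knowledge of
where the markup came from; styling can be applied via internal.css.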
I'm sorry, but I must have missed the discussion that resulted in this
decision. There may be some ambiguity over what is meant by the fetcher layer.
If the formatted output was not to your liking, these content
generation routines should have been extended and the formatting
applied using internal.css.
The method of inventing a content type and content handler which then
rummages around inside a different content handler is full of
problems. Indeed we have worked to completely remove this method of
operation and built a data model where it is not supported.
I agree that this solution is not ideal.
We probably need to think about this area more. For example, should there be
something between fetching (where I mean the actual protocol stuff) and the
content handling? Or can we extract the HTML logic from the HTML content code
to make it reusable? Or should we add a way to switch content types?
Right there is why you have got no other reviews, by the way: not
because it *is* a hack, but because you knew it was and did nothing
about it, nor provided a rational explanation.
I was intending to do a review but haven't had time yet.
> +nserror gopher_create(const content_handler *handler,
> + lwc_string *imime_type, const http_parameter *params,
> + llcache_handle *llcache, const char *fallback_charset,
> + bool quirks, struct content **c);
> +bool gopher_convert(struct content *c);
> +content_type gopher_content_type(lwc_string *mime_type);
> +static char *gen_nice_title(const char *path);
> +static bool gopher_generate_top(char *buffer, int buffer_length);
> +static bool gopher_generate_title(const char *title, char *buffer,
> +		int buffer_length);
> +static bool gopher_generate_row(const char **data, size_t *size,
> + char *buffer, int buffer_length);
> +static bool gopher_generate_bottom(char *buffer, int buffer_length);
> +static const char *gopher_faked_type = "text/x-gopher-directory";
> +static lwc_string *gopher_faked_mime_type;
As a general comment, we are trying to move away from forward
declarations. Secondly, every function should be doc-commented
somewhere: if it's static, in its scope here; if it's external, with a
comment saying in which header it is documented.
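For illustration, the doc-comment style being asked for might look like the
following. This is a simplified, hypothetical stand-in for the patch's
gen_nice_title (different signature and behaviour than the real one), shown
only to demonstrate documenting a static function in its own scope:

```c
#include <stddef.h>
#include <string.h>

/**
 * Build a human-readable title from a gopher selector path.
 *
 * \param path  the selector path, e.g. "/docs/readme"
 * \return pointer into \a path at the last path component, or
 *         \a path itself if it contains no usable '/' separator
 *
 * (Sketch only: the real gen_nice_title in the patch allocates and
 * returns a new string.)
 */
static const char *gen_nice_title(const char *path)
{
	const char *slash = strrchr(path, '/');
	return (slash != NULL && slash[1] != '\0') ? slash + 1 : path;
}
```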
This is my fault. I asked for it to be done like this, because it remains the
de-facto standard for everything in render/. It has the advantage of putting
the content callbacks first.