Christians Tagebuch A PHP4 user in 2014 (15.5.2014, 20:44)

Today I stumbled on a bug report for the Mail_Mime package in PEAR: Bug #20222: 1.8.8 not compatible with PHP4.

So in March 2014, someone noticed that a minor version upgrade of a package broke it on PHP4. Whoa.

But PEAR takes backwards compatibility very seriously, so this bug was fixed.

The PEAR version naming standard looks similar to semantic versioning. Just remember that PEAR already decided to follow these rules on 2004-11-21.

PHP_CodeSniffer 2.0.0a2 released (1.5.2014, 03:41)

I've just released the second alpha of PHP_CodeSniffer version 2.0.0. This update brings a new type of report, performance improvements, and a Phar distribution for download and testing.

Information Report

PHP_CodeSniffer now comes with an information report that is able to show you information about how your code is...

PHP_CodeSniffer 2.0.0 alpha1 released (5.2.2014, 03:11)

I've just released the first alpha of PHP_CodeSniffer version 2.0.0. This update brings an often requested feature: the ability for PHP_CodeSniffer to automatically fix the problems that it finds. It also contains a complete rewrite of the comment parsing sniffs, finally removing what I feel is the poorest code...

Christians Tagebuch PEAR on PHP 5.5: could not extract package.xml (24.1.2014, 05:38)

I recently upgraded my work computer from Ubuntu 12.04 to Ubuntu 13.10. Trying to upgrade a pear package, I got the following error:

$ pear upgrade http_request2
downloading HTTP_Request2-2.2.1.tgz ...
Starting to download HTTP_Request2-2.2.1.tgz (107,339 bytes)
.........................done: 107,339 bytes
could not extract the package.xml file from
Download of "pear/http_request2" succeeded, but it is not a valid package archive
Error: cannot download "pear/HTTP_Request2"
Download failed
upgrade failed

Ubuntu 13.10 ships with PHP 5.5.3, which changed the pack/unpack format strings a bit to align them with the Perl behavior. Unfortunately, this breaks backwards compatibility.
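The change can be illustrated with unpack(): up to PHP 5.4, the "a" code stripped trailing NUL bytes; since 5.5, "a" keeps them, and the new Perl-compatible "Z" code does the stripping. A minimal sketch (the field name is hypothetical):

```php
<?php
// tar headers pad fields with NUL bytes, e.g. an 8-byte name field:
$field = "tar\0\0\0\0\0";

// PHP <= 5.4: "a" stripped trailing NULs, so this yielded "tar".
// PHP >= 5.5: "a" keeps the padding; the new "Z" code strips it instead.
$a = unpack('a8name', $field);
$z = unpack('Z8name', $field); // "Z" did not exist before PHP 5.5

var_dump($a['name']); // string(8), NUL padding included, on PHP >= 5.5
var_dump($z['name']); // string(3) "tar"
```

Code written against the old "a" semantics, like Archive_Tar's tar header parsing, suddenly received NUL-padded strings after the upgrade.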

PEAR's Archive_Tar package used one of those now-changed format codes and thus could not extract packages on PHP 5.5 up to version 1.3.10. Version 1.3.11 fixes the issue and makes it compatible with PHP 5.5.

Now my problem was that the Ubuntu upgrade updated my PHP version, but not my manually managed PEAR installation. I thus had an old Archive_Tar version that did not work anymore with the new PHP version.

Luckily, fixing that issue was easy; I simply had to download and apply the patch:

$ pear info archive_tar|head -n1
$ cd `pear config-get php_dir`
$ wget -O /tmp/archive.diff ""
$ patch -p1 < /tmp/archive.diff
$ pear upgrade-all
... works

Christians Tagebuch Web Linking support in PEAR (10.10.2013, 14:20)

PEAR's HTTP2 package got Web Linking (RFC 5988) support in version 1.1.0.

Parsing HTTP Link: header values is now easy:

$link = '<http://example.org/webmention-endpoint>; rel="webmention"';

$http = new HTTP2();
$links = $http->parseLinks($link);

It will give you the following output:

array(1) {
  [0] => array(2) {
    '_uri' => string(38) "http://example.org/webmention-endpoint"
    'rel' => array(1) {
      [0] => string(10) "webmention"
    }
  }
}
HTTP link headers are used to express relations of the resource to other URIs, e.g. copyright info or prev/next links of a paged result.

Apart from the URI, link headers may contain a number of attributes (parameters). Here are some of them:

rel
    Relation of the URI to the current resource, e.g. "copyright", "index", "next" or "stylesheet". See the list of registered relations.
type
    MIME type of the URI. Can be used to link to alternate formats of the current resource.
title
    Human readable title of the link
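A hypothetical Link header combining these parameters could look like this:

```
Link: <http://example.org/report.pdf>; rel="alternate"; type="application/pdf"; title="PDF version"
```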

I implemented the HTTP2::parseLinks() method because web linking is used by WebMention to detect the URL of the linkback server.

PHP_CodeSniffer 1.4.7 and 1.5.0RC4 released (26.9.2013, 00:39)

PHP_CodeSniffer versions 1.4.7 and 1.5.0RC4 have just been uploaded to PEAR and are now available to install. Version 1.4.7 is primarily a bug fix release but also contains a new JUnit report format, a few new sniff settings, and a change to the PSR2 standard based on recently added...

Christians Tagebuch Net_Webfinger 0.3.0 released (9.8.2013, 04:45)

Webfinger - a way to discover information about people by just their email address - has changed quite a bit since I wrote the first version of Net_WebFinger, a PHP library to do this discovery.


The now 13th iteration of the spec got rid of RFC 6415 and requires only a single HTTP request to fetch the information:

The default serialization format now is JRD, the JSON version of XRD.

CORS is now mandatory, so that web-applications can fetch the files, too.

Package releases

To accommodate these changes, I released version 0.3.0 of Net_WebFinger, together with version 0.3.0 of XML_XRD that is used to parse the underlying XRD/JRD files.

I also took the time to update Net_WebFinger's and XML_XRD's documentation.

Net_WebFinger now supports the new Webfinger draft, but is still able to fall back to the old system - many providers, Google among them, haven't made the switch yet.

XML_XRD fully supports reading and writing JRD files now.

Happy discovery.

PHP_CodeSniffer 1.4.6 and 1.5.0RC3 released (25.7.2013, 05:10)

PHP_CodeSniffer versions 1.4.6 and 1.5.0RC3 have just been uploaded to PEAR and are now available to install. Version 1.4.6 is primarily a bug fix release but also contains a new JSON report format, a huge number of sniff docs, and a few new sniffs (mostly in the Squiz standard)...

Christians Tagebuch PHP: HTTP content negotiation (15.7.2013, 20:05)

HTTP requests contain headers that explain which data the client accepts and is able to understand: the type of content (Accept), language (Accept-Language), charset (Accept-Charset) and compression (Accept-Encoding).

By leveraging these header values, your web application can automatically deliver content in the correct language. Using content types in the Accept header, your REST API doesn't need versioned URLs but can react differently to the same URL.

Header value structure

Acceptance headers are comma-separated lists of values with optional extension data. One additional data point - quality - determines a ranking order between the values.

Simple header

Accept: image/png, image/jpeg, image/gif

Here the HTTP client expresses that it only understands content of the MIME types image/png, image/jpeg and image/gif.


Accept: image/png, image/jpeg;q=0.8, image/gif;q=0.5

Both image/jpeg and image/gif have a quality value now. jpeg's 0.8 is higher than gif's 0.5, so jpeg is preferred over gif. image/png has no explicit quality value, so the default quality of 1 is used. This means that in the end, png is preferred over jpeg, which is preferred over gif.

So if the server has the data available in both the .png and .jpeg formats, it should send the png file to the client.

Quality values may appear in any order:

Accept: image/gif;q=0.5, image/png, image/jpeg;q=0.8


Apart from the q quality extension, other tokens may be used:

Accept: text/html;video=0, text/html;q=0.9

In this example, the client prefers to get the HTML page without videos, but also falls back to the "normal" HTML page. (Note that this is a fictitious example. There is no video token standardized anywhere.)

Parsing header values

Parsing and interpreting the Accept* headers is not simply an explode() call: you also need to strip away the extensions and order the values by their quality.
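For illustration, here is a minimal hand-rolled sketch of the ranking step (it skips edge cases like wildcards and malformed values that a real parser must handle):

```php
<?php
// Split the Accept header, read each optional q value (default 1.0)
// and return the types ordered by descending quality.
function parseAccept($header)
{
    $quality = array();
    foreach (explode(',', $header) as $part) {
        $params = array_map('trim', explode(';', $part));
        $type   = array_shift($params);
        $q      = 1.0;
        foreach ($params as $param) {
            if (strpos($param, 'q=') === 0) {
                $q = (float) substr($param, 2);
            }
        }
        $quality[$type] = $q;
    }
    arsort($quality); // highest quality first
    return array_keys($quality);
}

var_dump(parseAccept('image/gif;q=0.5, image/png, image/jpeg;q=0.8'));
// image/png, image/jpeg, image/gif
```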

Instead of implementing this all yourself, you can rely on the stable and unit-tested library HTTP2 from PEAR.

Installation is simple:

$ pear install HTTP2-beta

To use it, simply require HTTP2.php:

require_once 'HTTP2.php';


The PHP Extension Community Library has an extension pecl_http which provides functions for HTTP content negotiation.

Truncated by Planet-PEAR, read more at the original (another 6159 bytes)

Christians Tagebuch PHP: Determine absolute link URLs (3.7.2013, 18:13)

When parsing HTML and following links, it is necessary to calculate absolute URLs from the href attribute values in <a> and <link> tags.

Link classes

Different types of link classes may occur in an HTML document:

Absolute URL
A URL with a scheme (protocol), host and path.
Absolute URL without scheme
The scheme is missing, but host and path are given. The document's protocol has to be used in this case, according to RFC 3986 section 4.2 and section 5.2.2.
Path-absolute URL without host
Scheme, hostname and port are missing - only an absolute path is given.
Relative path
A simple relative path.
Fragment only
An anchor with a hash sign in front. Links to another section in the same document.
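Hypothetical examples, one per class, resolved against the base URL http://example.org/dir/page.html:

```
http://other.example/about.html  -> http://other.example/about.html
//other.example/about.html       -> http://other.example/about.html
/about.html                      -> http://example.org/about.html
copyright.html                   -> http://example.org/dir/copyright.html
#license                         -> http://example.org/dir/page.html#license
```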

To resolve those URLs, you need both the document URL and the link href value.


Implementing the whole resolving algorithm is tedious, and you don't have to do it yourself. There are several implementations out there.


PEAR offers the Net_URL2 package. Its resolve() method implements the procedure properly, is unit-tested and has no other dependencies. Example:

$url = new Net_URL2('http://example.org/dir/page.html');
$abs = (string) $url->resolve('../image.png');
// $abs is 'http://example.org/image.png'

Absolute URL deriver

absolute-url-deriver is a small composer-installable lib for resolving relative URLs.

While this library consists of one file only, it depends on another lib (much larger) that provides URL handling.

Empty URLs

HTML5 allows empty action attributes in <form> tags. Both libraries listed above cope with that; they return the source URL when the "target" URL is empty.

Base href

HTML documents may have a <base> tag in their head section. When resolving links, you need to use its href value instead of the document's URL itself. See my XPath article for more information about extracting attribute values from HTML.
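A minimal sketch of picking the right base URL with DOM and XPath (the markup and the fallback document URL are hypothetical):

```php
<?php
$html = '<html><head><base href="http://example.org/sub/dir/"/>'
      . '<title>t</title></head>'
      . '<body><a href="../page.html">x</a></body></html>';

$doc = new DOMDocument();
$doc->loadHTML($html);

$xpath = new DOMXPath($doc);
// use the base href if present, otherwise fall back to the document's own URL
$nodes = $xpath->query('/html/head/base/@href');
$base  = $nodes->length
    ? $nodes->item(0)->nodeValue
    : 'http://example.org/current.html';

echo $base . "\n"; // http://example.org/sub/dir/
```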
