PHP_CodeSniffer 2.0.0 alpha1 released (5.2.2014, 03:11)

I've just released the first alpha of PHP_CodeSniffer version 2.0.0. This update brings an often-requested feature: the ability for PHP_CodeSniffer to automatically fix the problems that it finds. It also contains a complete rewrite of the comment parsing sniffs, finally removing what I feel is the poorest code...

Link
Christians Tagebuch - PEAR on PHP 5.5: could not extract package.xml (24.1.2014, 05:38)

I recently upgraded my work computer from Ubuntu 12.04 to Ubuntu 13.10. Trying to upgrade a pear package, I got the following error:

$ pear upgrade http_request2
downloading HTTP_Request2-2.2.1.tgz ...
Starting to download HTTP_Request2-2.2.1.tgz (107,339 bytes)
.........................done: 107,339 bytes
could not extract the package.xml file from
"/tmp/pear/install/HTTP_Request2-2.2.1.tgz"
Download of "pear/http_request2" succeeded, but it is not a valid package archive
Error: cannot download "pear/HTTP_Request2"
Download failed
upgrade failed

Ubuntu 13.10 ships with PHP 5.5.3, which changed the pack/unpack format strings a bit to align them with the Perl behavior. Unfortunately, this breaks backwards compatibility.
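The change can be demonstrated directly with pack() and unpack(); a minimal sketch, assuming PHP 5.5 or later:

```php
<?php
// Before PHP 5.5, unpack()'s "a" format stripped trailing NUL bytes;
// since 5.5 it keeps them (Perl behavior), and the new "Z" format
// does the stripping instead.
$data = pack('a8', 'foo');     // "foo" padded with NULs to 8 bytes

$a = unpack('a8str', $data);
var_dump($a['str']);           // "foo\0\0\0\0\0" on PHP >= 5.5

$z = unpack('Z8str', $data);
var_dump($z['str']);           // "foo" - NULs stripped
?>
```

Code that relied on the old stripping behavior of "a" - like Archive_Tar reading tar headers - silently gets NUL-padded strings back on PHP 5.5.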

PEAR's Archive_Tar package used one of those now-changed format strings and thus could not extract packages on PHP 5.5 up to and including version 1.3.10. Version 1.3.11 fixes the issue and makes it compatible with PHP 5.5.

Now my problem was that the Ubuntu upgrade updated my PHP version, but not my manually managed PEAR installation. I thus had an old Archive_Tar version that did not work anymore with the new PHP version.

Luckily, fixing that issue was easy; I simply had to download and apply the patch:

$ pear info archive_tar|head -n1
ABOUT PEAR.PHP.NET/ARCHIVE_TAR-1.3.8
$ cd `pear config-get php_dir`
$ wget -O /tmp/archive.diff "https://pear.php.net/bugs/patch-download.php?id=19746&patch=archive_tar_php55.patch&revision=1355241213"
$ patch -p1 < /tmp/archive.diff
$ pear upgrade-all
... works

Link
Christians Tagebuch - Web Linking support in PEAR (10.10.2013, 14:20)

PEAR's HTTP2 package got Web Linking (RFC 5988) support in version 1.1.0.

Parsing HTTP Link: header values is now easy:

<?php
require_once 'HTTP2.php';

$link = '<http://pear.php.net/webmention.php>; rel="webmention"';

$http = new HTTP2();
$links = $http->parseLinks($link);
var_dump($links);
?>

It will give you the following output:

array(1) {
  [0] => array(2) {
    '_uri' => string(34) "http://pear.php.net/webmention.php"
    'rel' => array(1) {
      [0] => string(10) "webmention"
    }
  }
}

HTTP link headers are used to express relations of the resource to other URIs, e.g. copyright info or prev/next links of a paged result.

Apart from the URI, link headers may contain a number of attributes (parameters). Here are some of them:

rel
Relation of the URI to the current resource, e.g. "copyright", "index", "next" or "stylesheet". See the list of registered relations.
type
MIME type of the URI. Can be used to link to alternate formats of the current resource.
title
Human readable title of the link
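To illustrate that structure, here is a plain-PHP sketch - not the HTTP2 API - that splits a single Link header value into its URI and parameters:

```php
<?php
// Naive illustration only - HTTP2::parseLinks() handles the full
// RFC 5988 syntax, including multiple comma-separated links.
function splitLink($value)
{
    $parts = array_map('trim', explode(';', $value));
    $uri   = trim(array_shift($parts), '<>');

    $params = array();
    foreach ($parts as $part) {
        list($key, $val) = explode('=', $part, 2);
        $params[$key] = trim($val, '"');
    }
    return array('_uri' => $uri, 'params' => $params);
}

var_dump(splitLink('<http://example.org/ch2>; rel="next"; title="Chapter 2"'));
?>
```

The real parser additionally splits rel values into an array and copes with quoted semicolons, which is exactly why you want the library instead of this sketch.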

I implemented the HTTP2::parseLinks() method because web linking is used by WebMention to detect the URL of the linkback server.

Link
PHP_CodeSniffer 1.4.7 and 1.5.0RC4 released (26.9.2013, 00:39)

PHP_CodeSniffer versions 1.4.7 and 1.5.0RC4 have just been uploaded to PEAR and are now available to install. Version 1.4.7 is primarily a bug fix release but also contains a new JUnit report format, a few new sniff settings, and a change to the PSR2 standard based on recently added...

Link
Christians Tagebuch - Net_WebFinger 0.3.0 released (9.8.2013, 04:45)

Webfinger - a way to discover information about people by just their email address - changed quite a bit since I wrote the first version of Net_WebFinger, a PHP library to do this discovery.

Changes

The now 13th iteration of the spec got rid of RFC 6415, requiring only a single HTTP request to fetch the information:

http://example.org/.well-known/webfinger?resource=acct:bob@example.org
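Building that request URL in PHP is straightforward; a small sketch (the function name is mine, not part of the Net_WebFinger API):

```php
<?php
// Build the WebFinger discovery URL for an account.
// The resource parameter must be percent-encoded.
function webfingerUrl($account)
{
    list($user, $host) = explode('@', $account, 2);
    return 'https://' . $host
        . '/.well-known/webfinger?resource='
        . rawurlencode('acct:' . $account);
}

echo webfingerUrl('bob@example.org');
// https://example.org/.well-known/webfinger?resource=acct%3Abob%40example.org
?>
```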

The default serialization format now is JRD, the JSON version of XRD.

CORS is now mandatory, so that web applications can fetch the files, too.

Package releases

To accommodate these changes, I released version 0.3.0 of Net_WebFinger, together with version 0.3.0 of XML_XRD that is used to parse the underlying XRD/JRD files.

I also took the time to update Net_WebFinger's and XML_XRD's documentation.

Net_WebFinger now supports the new Webfinger draft, but is still able to fall back to the old system - many providers, Google among them, haven't made the switch yet.

XML_XRD fully supports reading and writing JRD files now.

Happy discovery.

Link
PHP_CodeSniffer 1.4.6 and 1.5.0RC3 released (25.7.2013, 05:10)

PHP_CodeSniffer versions 1.4.6 and 1.5.0RC3 have just been uploaded to PEAR and are now available to install. Version 1.4.6 is primarily a bug fix release but also contains a new JSON report format, a huge number of sniff docs, and a few new sniffs (mostly in the Squiz standard)...

Link
Christians Tagebuch - PHP: HTTP content negotiation (15.7.2013, 20:05)

HTTP requests contain headers that explain which data the client accepts and is able to understand: the type of the content (Accept), language (Accept-Language), charset (Accept-Charset) and compression (Accept-Encoding).

By leveraging these header values, your web application can automatically deliver content in the correct language. Using content types in the Accept header, your REST API doesn't need versioned URLs but can respond differently to the same URL.

Header value structure

Acceptance headers are comma-separated lists of values with optional extension data. One additional data point - quality - determines a ranking order between the values.

Simple header

Accept: image/png, image/jpeg, image/gif

Here the HTTP client expresses that it only understands content of the MIME types image/png, image/jpeg and image/gif.

Quality

Accept: image/png, image/jpeg;q=0.8, image/gif;q=0.5

Both image/jpeg and image/gif have a quality value now. jpeg's 0.8 is higher than gif's 0.5, so jpeg is preferred over gif. image/png has no explicit quality value, so the default quality of 1 is used. This means that in the end, png is preferred over jpeg, which is preferred over gif.

So if the server has the data available in both .png and .jpeg format, it should send the png file to the client.

Quality values may appear in any order:

Accept: image/gif;q=0.5, image/png, image/jpeg;q=0.8

Extensions

Apart from the q quality extension, other tokens may be used:

Accept: text/html;video=0, text/html;q=0.9

In this example, the client prefers to get the HTML page without videos, but also falls back to the "normal" HTML page. (Note that this is a fictitious example; there is no video token standardized anywhere.)

Parsing header values

Parsing and interpreting the Accept* headers is not just an explode() call: you also need to strip away the extensions and order the values by their quality.
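To see what's involved, here is a minimal sketch of such a parser; it is simplified and only handles the q extension, ignoring all others:

```php
<?php
// Parse an Accept-style header into type => quality,
// ordered from highest to lowest quality.
function parseAccept($header)
{
    $result = array();
    foreach (explode(',', $header) as $part) {
        $tokens = array_map('trim', explode(';', $part));
        $type   = array_shift($tokens);
        $q      = 1.0;                  // default quality
        foreach ($tokens as $ext) {
            if (preg_match('/^q=([0-9.]+)$/', $ext, $matches)) {
                $q = (float) $matches[1];
            }
        }
        $result[$type] = $q;
    }
    arsort($result);                    // highest quality first
    return $result;
}

var_dump(parseAccept('image/gif;q=0.5, image/png, image/jpeg;q=0.8'));
// png (1) first, then jpeg (0.8), then gif (0.5)
?>
```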

Instead of implementing all this yourself, you can rely on the stable and unit-tested HTTP2 library from PEAR.

Installation is simple:

$ pear install HTTP2-beta

To use it, simply require HTTP2.php:

require_once 'HTTP2.php';

pecl_http

The PHP Extension Community Library has an extension pecl_http which provides functions for HTTP content negotiation.

Truncated by Planet-PEAR, read more at the original (another 6159 bytes)

Link
Christians Tagebuch - PHP: Determine absolute link URLs (3.7.2013, 18:13)

When parsing HTML and following links, it is necessary to calculate absolute URLs from the href attribute values in <a> and <link> tags.

Link classes

Different types of link classes may occur in an HTML document:

Absolute URL
http://example.org/foo.html
A URL with scheme (protocol), host and path.
Absolute URL without scheme
//example.org/foo.html
The scheme is missing, but host and path are given. The document's protocol has to be used in this case, according to RFC 3986 section 4.2 and section 5.2.2.
Path-absolute URL without host
/path/to/file.html
Scheme, hostname and port are missing - only an absolute path is given.
Relative path
../foo/bar.html
A simple relative path.
Fragment only
#baz
An anchor with a hash sign in front. Links to another section in the same document.

To resolve those URLs, you need both the document URL and the link href value.

Code

Implementing the whole resolving algorithm is tedious, and you don't have to do it yourself. There are several implementations out there.

Net_URL2

PEAR offers the Net_URL2 package. Its resolve() method implements the procedure properly, is unit-tested and has no other dependencies. Example:

<?php
require_once 'Net/URL2.php';

// base URL chosen for illustration; any document URL works
$url = new Net_URL2('http://example.org/dir/file.html');
$abs = $url->resolve('../baz.jpg');
// $abs is 'http://example.org/baz.jpg'
?>

Absolute URL deriver

absolute-url-deriver is a small composer-installable lib for resolving relative URLs.

While this library consists of one file only, it depends on another lib (much larger) that provides URL handling.

Empty URLs

HTML5 allows empty action attributes in <form> tags. Both libraries listed above cope with that; they return the source URL when the "target" URL is empty.

Base href

HTML documents may have a <base href> tag in their head section. When resolving links, you need to use this one instead of the document's URL itself. See my XPath article for more information about extracting attribute values from HTML.
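Extracting that value with PHP's DOM extension takes only a few lines; a sketch with an invented example document:

```php
<?php
// Read the <base href> value (if any) with DOMDocument/DOMXPath.
$html = '<html><head><base href="http://example.org/sub/"/></head>'
      . '<body><a href="../foo.html">foo</a></body></html>';

$doc = new DOMDocument();
$doc->loadHTML($html);

$xpath = new DOMXPath($doc);
$base  = $xpath->evaluate('string(//base/@href)');
echo $base; // http://example.org/sub/
?>
```

If the expression returns an empty string, there is no base tag and you fall back to the document's own URL.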

Link
Christians Tagebuch - PHP_CodeSniffer + emacs 24: Compilation error (6.5.2013, 08:25)

After upgrading to Ubuntu 13.04, emacs did not highlight compilation results from PHP_CodeSniffer anymore. Instead of a nicely colored log window, I had the following in the *Messages* buffer:

Warning: defvar ignored because compilation-error-regexp-alist is let-bound
(No files need saving)
Error during redisplay: (void-variable compilation-error-regexp-alist) [4 times]
Compilation exited abnormally with code 1

A bug report gave me the hint about what to change:

To have the compilation mode variables globally available, I need to require the module in global scope, not in the scope of my phpcs-compilation definition.

So after adding (require 'compile) before (defun phpcs() .. in my .emacs file, compilation works properly again.

Link
PHP_CodeSniffer 1.4.5 and 1.5.0RC2 released (3.4.2013, 22:55)

PHP_CodeSniffer versions 1.4.5 and 1.5.0RC2 have just been uploaded to PEAR and are now available to install. Version 1.4.5 is primarily a bug fix release, although there are a few new sniffs and sniff settings that some developers may find useful. In addition to these changes, 1.5.0RC2 contains big...

Link