Codelog PHP5+ Mail, Net_Socket and Net_SMTP (4.6.2014, 06:12)

Mail is one of the most popular PEAR packages still, but a little bit aged.

To turn it into something a little more modern, I forked it and its dependencies and brought them up to PHP5 compatibility.

We've now got:

  • https://github.com/pear/Mail2 - Not yet released, I figure I'll give it a few days to bake
  • https://github.com/pear/Net_SMTP2 - Released as 0.1.0
  • https://github.com/pear/Net_Socket2 - Released as 0.1.0

If you use any of these components and are currently suppressing nasty E_STRICT notices, you can pretty much drop in the new versions.

The key differences: exceptions are now raised instead of PEAR_Error objects being returned, the code uses PHP5 syntax, and there are no more E_DEPRECATED and similar notices.
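
In practice that changes the calling code from PEAR_Error checks to try/catch. A rough sketch - the Mail2 class and exception names below are assumptions mirroring the old API, not taken from the fork, so check the package source:

```php
<?php
// Old PEAR Mail style: errors come back as PEAR_Error objects.
$mail   = Mail::factory('smtp', array('host' => 'localhost'));
$result = $mail->send($recipients, $headers, $body);
if (PEAR::isError($result)) {
    echo 'Send failed: ' . $result->getMessage();
}

// Mail2 sketch: failures raise exceptions instead
// (class name "Mail2" is illustrative here).
try {
    $mail = Mail2::factory('smtp', array('host' => 'localhost'));
    $mail->send($recipients, $headers, $body);
} catch (Exception $e) {
    echo 'Send failed: ' . $e->getMessage();
}
```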

Link
Christians Tagebuch A PHP4 user in 2014 (15.5.2014, 20:44)

Today I stumbled on a bug report for the Mail_Mime package in PEAR: Bug #20222: 1.8.8 not compatible with PHP4.

So in March 2014, someone noticed that a minor version upgrade of a package broke it on PHP4. Whoa.

But PEAR takes backwards compatibility very seriously, so this bug was fixed.

The PEAR version naming standard looks similar to semantic versioning. Just remember that PEAR already decided on 2004-11-21 to follow these rules.

Link
PHP_CodeSniffer 2.0.0a2 released (1.5.2014, 03:41)

I've just released the second alpha of PHP_CodeSniffer version 2.0.0. This update brings a new type of report, performance improvements, and a Phar distribution for easier download and testing. Information Report PHP_CodeSniffer now comes with an information report that is able to show you information about how your code is...

Link
PHP_CodeSniffer 2.0.0 alpha1 released (5.2.2014, 03:11)

I've just released the first alpha of PHP_CodeSniffer version 2.0.0. This update brings an often requested feature: the ability for PHP_CodeSniffer to automatically fix the problems that it finds. It also contains a complete rewrite of the comment parsing sniffs, finally removing what I feel is the poorest code...

Link
Christians Tagebuch PEAR on PHP 5.5: could not extract package.xml (24.1.2014, 05:38)

I recently upgraded my work computer from Ubuntu 12.04 to Ubuntu 13.10. Trying to upgrade a pear package, I got the following error:

$ pear upgrade http_request2
downloading HTTP_Request2-2.2.1.tgz ...
Starting to download HTTP_Request2-2.2.1.tgz (107,339 bytes)
.........................done: 107,339 bytes
could not extract the package.xml file from
"/tmp/pear/install/HTTP_Request2-2.2.1.tgz"
Download of "pear/http_request2" succeeded, but it is not a valid package archive
Error: cannot download "pear/HTTP_Request2"
Download failed
upgrade failed

Ubuntu 13.10 ships with PHP 5.5.3, which changed the pack()/unpack() format codes a bit to align them with Perl's behavior. Unfortunately, this breaks backwards compatibility.

PEAR's Archive_Tar package used one of the now-changed format codes and thus, up to and including version 1.3.10, could not extract packages on PHP 5.5. Version 1.3.11 fixes the issue and makes it compatible with PHP 5.5.
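
The change can be seen directly with unpack(). A minimal sketch (on PHP 5.5 and later the "a" code keeps trailing NUL bytes, while "A" strips trailing whitespace and NULs):

```php
<?php
// A NUL-padded 8-byte field, like the ones in a tar header.
$field = "usr\0\0\0\0\0";

// "a": since PHP 5.5 the raw bytes are returned, padding included.
$raw = unpack('a8name', $field);
var_dump($raw['name']);      // string(8) - trailing NULs kept

// "A": trailing NULs and whitespace are stripped (Perl-like behavior).
$stripped = unpack('A8name', $field);
var_dump($stripped['name']); // string(3) "usr"
```

Code that read NUL-padded header fields with "a" and relied on the old stripping behavior - as Archive_Tar did - suddenly got the padding back in its values on PHP 5.5.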

Now my problem was that the Ubuntu upgrade updated my PHP version, but not my manually managed PEAR installation. I thus had an old Archive_Tar version that did not work anymore with the new PHP version.

Luckily, fixing that issue was easy; I simply had to download and apply the patch :

$ pear info archive_tar|head -n1
ABOUT PEAR.PHP.NET/ARCHIVE_TAR-1.3.8
$ cd `pear config-get php_dir`
$ wget -O /tmp/archive.diff "https://pear.php.net/bugs/patch-download.php?id=19746&patch=archive_tar_php55.patch&revision=1355241213"
$ patch -p1 < /tmp/archive.diff
$ pear upgrade-all
... works

Link
Christians Tagebuch Web Linking support in PEAR (10.10.2013, 14:20)

PEAR's HTTP2 package got Web Linking (RFC 5988) support in version 1.1.0.

Parsing HTTP Link: header values is now easy:

<?php
require_once 'HTTP2.php';

$link = '<http://pear.php.net/webmention.php>; rel="webmention"';

$http = new HTTP2();
$links = $http->parseLinks($link);
var_dump($links);
?>

It will give you the following output:

array(1) {
  [0] => array(2) {
    '_uri' => string(34) "http://pear.php.net/webmention.php"
    'rel' => array(1) {
      [0] => string(10) "webmention"
    }
  }
}

HTTP link headers are used to express relations of the resource to other URIs, e.g. copyright info or prev/next links of a paged result.

Apart from the URI, link headers may contain a number of attributes (parameters). Here are some of them:

rel
Relation of the URI to the current resource, e.g. "copyright", "index", "next" or "stylesheet". See the list of registered relations.
type
MIME type of the URI. Can be used to link to alternate formats of the current resource.
title
Human-readable title of the link.
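
Combined, these attributes can appear in a single header. An illustrative example (the URL is made up; the syntax follows RFC 5988):

```
Link: <http://example.org/print.css>; rel="stylesheet"; type="text/css"; title="Print version"
```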

I implemented the HTTP2::parseLinks() method because web linking is used by WebMention to detect the URL of the linkback server.

Link
PHP_CodeSniffer 1.4.7 and 1.5.0RC4 released (26.9.2013, 00:39)

PHP_CodeSniffer versions 1.4.7 and 1.5.0RC4 have just been uploaded to PEAR and are now available to install. Version 1.4.7 is primarily a bug fix release but also contains a new JUnit report format, a few new sniff settings, and a change to the PSR2 standard based on recently added...

Link
Christians Tagebuch Net_Webfinger 0.3.0 released (9.8.2013, 04:45)

Webfinger - a way to discover information about people by just their email address - changed quite a bit since I wrote the first version of Net_WebFinger, a PHP library to perform this discovery.

Changes

The spec, now in its 13th iteration, got rid of RFC 6415 and requires only a single HTTP request to fetch the information:

http://example.org/.well-known/webfinger?resource=acct:bob@example.org

The default serialization format now is JRD, the JSON version of XRD.

CORS is now mandatory, so that web-applications can fetch the files, too.
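
For illustration, a minimal JRD document answering that request could look like this (the values are made up; the structure follows the Webfinger draft):

```
{
    "subject": "acct:bob@example.org",
    "links": [
        {
            "rel": "http://webfinger.net/rel/profile-page",
            "type": "text/html",
            "href": "http://example.org/~bob/"
        }
    ]
}
```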

Package releases

To accommodate these changes, I released version 0.3.0 of Net_WebFinger, together with version 0.3.0 of XML_XRD that is used to parse the underlying XRD/JRD files.

I also took the time to update Net_WebFinger's and XML_XRD's documentation.

Net_Webfinger now supports the new Webfinger draft, but is still able to fall back to the old system - many providers, Google among them, haven't made the switch yet.

XML_XRD fully supports reading and writing JRD files now.

Happy discovery.

Link
PHP_CodeSniffer 1.4.6 and 1.5.0RC3 released (25.7.2013, 05:10)

PHP_CodeSniffer versions 1.4.6 and 1.5.0RC3 have just been uploaded to PEAR and are now available to install. Version 1.4.6 is primarily a bug fix release but also contains a new JSON report format, a huge number of sniff docs, and a few new sniffs (mostly in the Squiz standard)...

Link
Christians Tagebuch PHP: HTTP content negotiation (15.7.2013, 20:05)

HTTP requests contain headers that explain which data the client accepts and is able to understand: the type of the content (Accept), language (Accept-Language), charset (Accept-Charset) and compression (Accept-Encoding).

By leveraging these header values, your web application can automatically deliver content in the correct language. Using content types in the Accept header, your REST API doesn't need versioned URLs but can react differently to the same URL.

Header value structure

Acceptance headers are comma-separated lists of values with optional extension data. One additional data point - quality - determines a ranking order between the values.

Simple header

Accept: image/png, image/jpeg, image/gif

Here the HTTP client expresses that it understands only content of the MIME types image/png, image/jpeg and image/gif.

Quality

Accept: image/png, image/jpeg;q=0.8, image/gif;q=0.5

Both image/jpeg and image/gif have a quality value now. jpeg's 0.8 is higher than gif's 0.5, so jpeg is preferred over gif. image/png has no explicit quality value, so the default quality of 1 is used. This means that in the end, png is preferred over jpeg, which is preferred over gif.

So if the server has the data available in both .png and .jpeg formats, it should send the png file to the client.

Quality values may appear in any order:

Accept: image/gif;q=0.5, image/png, image/jpeg;q=0.8

Extensions

Apart from the q quality extension, other tokens may be used:

Accept: text/html;video=0, text/html;q=0.9

In this example, the client prefers to get the HTML page without videos, but will also fall back to the "normal" HTML page. (Note that this is a fictional example; there is no video token standardized anywhere.)

Parsing header values

Parsing and interpreting the Accept* headers is not simply an explode() call: you also need to strip away the extensions and order the values by their quality.
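
A minimal sketch of what such a parser involves - this is not the HTTP2 code, just the idea: split on commas, read the optional q parameter, then sort by quality:

```php
<?php
// Sketch of Accept-header parsing: higher quality first,
// original order breaks ties (usort is not guaranteed stable).
function parseAccept($header)
{
    $entries = array();
    foreach (explode(',', $header) as $i => $part) {
        $params = array_map('trim', explode(';', $part));
        $type   = array_shift($params);
        $q      = 1.0;                     // default quality
        foreach ($params as $param) {
            if (preg_match('/^q=([0-9.]+)$/', $param, $m)) {
                $q = (float) $m[1];
            }
        }
        $entries[] = array('type' => $type, 'q' => $q, 'pos' => $i);
    }
    usort($entries, function ($a, $b) {
        if ($a['q'] == $b['q']) {
            return $a['pos'] - $b['pos'];
        }
        return ($b['q'] > $a['q']) ? 1 : -1;
    });
    $types = array();
    foreach ($entries as $entry) {
        $types[] = $entry['type'];
    }
    return $types;
}

var_dump(parseAccept('image/gif;q=0.5, image/png, image/jpeg;q=0.8'));
// image/png, image/jpeg, image/gif
```

A real implementation also has to handle q=0 (unacceptable), wildcards like image/* and other extension tokens - which is exactly why a tested library is preferable.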

Instead of implementing all this yourself, you can rely on the stable and unit-tested HTTP2 library from PEAR.

Installation is simple:

$ pear install HTTP2-beta

To use it, simply require HTTP2.php:

require_once 'HTTP2.php';

pecl_http

The PHP Extension Community Library has an extension pecl_http which provides functions for HTTP content negotiation.

Truncated by Planet-PEAR, read more at the original (another 6159 bytes)

Link