Testing cron jobs in Magento

24 May 2017 in Magento

First, find out how the cron job is called from the module's config.xml by looking at the /config/crontab/jobs/*/run/model XPath, e.g.:

<config>
  ...
  <crontab> 
    <jobs> 
      <my_cronjob>
        <run>
          <model>my_cronjob/observer::my_method</model>
        </run>
        ...

In this example, you need to instantiate the my_cronjob/observer model and call its my_method method. To do this, create a file to run from the command line with these contents:

<?php
// initialize magento application
require_once '/path/to/app/Mage.php';    
Mage::app();

// initialize model and run the method
$myCronJob = Mage::getModel('my_cronjob/observer');
$myCronJob->my_method();
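
Save this in a file, for example test-cron.php (the name is arbitrary), and run it from the command line:

php test-cron.php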

Writing a Git commit message

22 May 2017 in Git

A good commit message should answer three questions, in order to establish the context in which the commit is applied:

  • Why is it necessary?
  • How does it address the issue?
  • What effects does it have?

Of course, there's no ideal commit message, but there are some general rules:

Use imperative statements in the subject line. For example, write "Fix broken link" instead of "Fixed broken link". Write the subject so that it completes the following sentence: "If applied, this commit will Fix broken link".

Use 50 characters or less in the subject line. Keeping subject lines under the 50-character limit ensures that they are readable and concise. If the change is difficult to summarize, it may be because it includes several logical changes or bug fixes. Also avoid lazy or generic messages.

Do not end the subject line with a period. It's a title and titles don't end with a period.

Begin the subject line with capitalized text. Simple as that.

Separate the subject from the message body with a blank line. This is mostly useful when browsing the log.

Wrap the message body at 72 characters. Because conventions. And git log.

Explain what and why, not how. Show the intent and the context. The how is easy for anyone to see by simply doing a git diff.

Mention associated issues at the end of the body. This is usually the place to be.
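
Putting it all together, a commit message following these rules might look like this (the content is made up):

Fix broken link in the installation guide

The link to the configuration reference pointed to a page that was
removed in the last docs restructuring, so readers ended up on a 404
page.

Point the link to the new location of the configuration reference.

Closes #123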

Replacing Disqus with Github Comments

21 May 2017 in Disqus

In this post at http://donw.io/post/github-comments/, Don Williamson describes his findings after replacing Disqus with GitHub comments.

Some interesting facts:

  • Load-time goes from 6 seconds to 2 seconds.
  • There are 105 network requests vs. 16.
  • There are a lot of non-relevant requests going through to networks that will be tracking your movements.

The author continues by listing some of the tracking networks contacted when using the Disqus platform:

  • disqus.com
  • google-analytics.com
  • connect.facebook.net
  • accounts.google.com
  • pippio.com
  • bluekai.com
  • crwdcntrl.net
  • exelator.com
  • doubleclick.net
  • tag.apxlv.net
  • adnxs.com
  • adsymptotic.com
  • rlcdn.com
  • adbrn.com
  • nexac.com
  • tapad.com
  • liadm.com
  • sohern.com
  • demdex.net
  • bidswitch.net
  • agkn.com
  • mathtag.com

And this is the important point:

Needless to say, it’s a pretty disgusting insight into how certain free products turn you into the product.

Read the whole article at http://donw.io/post/github-comments/

High Performance Browser Networking

19 May 2017 in Optimization, Performance

A book from Ilya Grigorik, web performance engineer at Google:

Performance is a feature. This book provides a hands-on overview of what every web developer needs to know about the various types of networks (WiFi, 3G/4G), transport protocols (UDP, TCP, and TLS), application protocols (HTTP/1.1, HTTP/2), and APIs available in the browser (XHR, WebSocket, WebRTC, and more) to deliver the best - fast, reliable and resilient - user experience.

Available to read online at hpbn.co

Covering index and MySQL

12 Mar 2017 in MySQL

In most cases, an index is used to quickly locate the data records from which the required data is read; additional roundtrips to the database tables are then required to fetch the data.

A covering index is a type of index where the index itself contains all required data fields or, in other words, all fields selected in a query are covered by an index. This eliminates the additional roundtrips to the database tables, which are I/O bound, thus improving performance. Note that in MySQL, this applies only to InnoDB tables.

Also beware that using many fields in an index will degrade the performance of write operations like INSERT, UPDATE and DELETE.
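
As a quick sketch (the table and the query are made up), an index that contains every column a query reads lets MySQL answer the query from the index alone; EXPLAIN then reports "Using index" in the Extra column:

CREATE TABLE orders (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id INT UNSIGNED NOT NULL,
    status      VARCHAR(20) NOT NULL,
    total       DECIMAL(10,2) NOT NULL,
    KEY idx_customer_status_total (customer_id, status, total)
) ENGINE=InnoDB;

-- every column referenced below is part of idx_customer_status_total,
-- so the base table rows are never touched
EXPLAIN SELECT status, total FROM orders WHERE customer_id = 42;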

Mocking GuzzlePHP in phpunit

05 Mar 2017 in GuzzlePHP, phpunit

Suppose you have a class making some HTTP calls. Using the GuzzlePHP library instead of curl or file_get_contents gives you a simple interface for building HTTP requests:

    <?php
    class MyClass
    {
        private $client;
        private $params = [
            "url"     => "http://some-api-endpoint-url",
            "key"     => null,
            "timeout" => 30,
        ];

        public function __construct(GuzzleHttp\Client $client, array $params = [])
        {
            $this->client = $client;
            $this->params = array_merge($this->params, $params);
        }

        public function callTheApi()
        {
            $response = $this->client->post(
                "{$this->params["url"]}?key={$this->params["key"]}",
                [
                    'headers' => [
                        'Content-Type' => 'application/json',
                        'Accept'       => 'application/json'
                    ],
                    'timeout' => $this->params["timeout"],
                ]
            );
            return GuzzleHttp\json_decode(
                $response->getBody()->getContents()
            );
        }
    }

When unit testing a piece of code like this, instead of mocking the GuzzlePHP client via phpunit's mock methods, you can use the GuzzleHttp\Handler\MockHandler as shown below:

    <?php
    class MyClassTest extends PHPUnit_Framework_TestCase
    {
        private $sut; // system under test
        private $client;
        private $handler;

        public function setUp()
        {
            $this->handler = new GuzzleHttp\Handler\MockHandler();
            $this->client = new GuzzleHttp\Client([
                'handler' => GuzzleHttp\HandlerStack::create($this->handler)
            ]);
            $this->sut = new MyClass($this->client, []);
        }

        public function testCallTheApi()
        {
            // queue a response with a JSON body so json_decode() in callTheApi() succeeds
            $this->handler->append(new GuzzleHttp\Psr7\Response(200, [], '{"status":"ok"}'));
            $result = $this->sut->callTheApi();
            // assertions follow
        }
    }
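
The mock handler also records each request it serves, so you can assert on what was actually sent. For example (these assertions are only illustrative):

    // e.g. inside testCallTheApi(), after $this->sut->callTheApi() has run
    $request = $this->handler->getLastRequest();
    $this->assertEquals('POST', $request->getMethod());
    $this->assertEquals('application/json', $request->getHeaderLine('Content-Type'));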

Head over to http://docs.guzzlephp.org/en/latest/testing.html for the full documentation. And remember, always test your code.

Improving javascript load/parse times

12 Feb 2017 in Javascript, Optimization

Loading a webpage is much more than just downloading the page content. For the javascript part, the browser has to download, parse, interpret and then run the javascript... scripts.

In this post, Addy Osmani, staff engineer (is this a thing?) at Google, shares some notes on how javascript may slow down a web page and how you can speed things up.

Most notable quotes:

Historically, we just haven’t spent a lot of time optimizing for the JavaScript Parse/Compile step. We almost expect scripts to be immediately parsed and executed as soon as the parser hits a <script> tag. But this isn’t quite the case...

Parsing, Compiling and Executing scripts are things a JavaScript engine spends significant time in during start-up. This matters as if it takes a while, it can delay how soon users can interact with our site...

As we move to an increasingly mobile world, it’s important that we understand the time spent in Parse/Compile can often be 2–5x as long on phones as on desktop...

Script size is important, but it isn’t everything. Parse and Compile times don't necessarily increase linearly when the script size increases...

In order to improve parse times, the author suggests the following:

  • Ship less JavaScript
  • Use code-splitting to only ship the code a user needs
  • Script streaming
  • Measure the parse cost of dependencies

So, next time, beware of pulling in two tons of javascript libraries just to manipulate some DOM elements.
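
As a rough sketch of the code-splitting suggestion above (the module and element names are made up), a dynamic import() downloads and parses a heavy dependency only when it is actually needed:

// load the charting code only when the user opens the reports tab
document.querySelector('#reports-tab').addEventListener('click', async () => {
    const { renderCharts } = await import('./charts.js'); // fetched and parsed on demand
    renderCharts(document.querySelector('#reports'));
});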

Delete emails from Postfix queue

30 Jan 2017 in Email, Postfix, Spam

On Postfix, to delete emails sent from some domain (for example domain.tld), you can use:

mailq | awk '$7 ~ /@domain.tld$/ { print $1 }' | tr -d '*!' | postsuper -d -

To delete from a specific sender:

mailq | awk '$7 ~/^username@domain.tld$/ { print $1 }' | tr -d '*!' | postsuper -d -

NB: To count all messages in the queue, use:

mailq | grep -c '^[0-9A-Z]'
# or
mailq | grep -c '^\w'

Useful when your WordPress installation starts sending spam emails.

Synchronize emails between mail servers

26 Jan 2017 in Email, IMAP

For this, I use imapsync, an IMAP tool for syncing, copying and migrating email mailboxes. To set it up on an Ubuntu machine (instructions shamelessly stolen from Joeri Verdeyen's blog), use:

# install dependencies
sudo apt-get install makepasswd rcs perl-doc libio-tee-perl git \
    libmail-imapclient-perl libdigest-md5-file-perl \
    libterm-readkey-perl libfile-copy-recursive-perl \
    build-essential make automake libunicode-string-perl

# clone git repository
git clone git://github.com/imapsync/imapsync.git

# build/install
cd imapsync
mkdir dist
sudo make install

The make install step above may complain about missing Perl dependencies. If so, run cpan with sudo and install the required dependencies like:

sudo cpan
# and from cpan> prompt
install Unicode::String # or whatever make install told you

Then to synchronize emails for a given email account, just do:

imapsync --host1 host1 --user1 user1 --password1 pass1 \
         --host2 host2 --user2 user2 --password2 pass2

A simple bash script to automate this procedure is:

#!/bin/bash
from=''
to=''
while getopts 'f:t:' flag; do
  case "${flag}" in
    f) from="${OPTARG}" ;;
    t) to="${OPTARG}" ;;
  esac
done
if [ -z "$from" ]; then
    echo "From host not specified (hint -f)! Exiting"
    exit 1
fi
if [ -z "$to" ]; then
    echo "To host not specified (hint -t)! Exiting"
    exit 1
fi
# read "user password" pairs (space delimited) from stdin, one per line
while read -r line; do
    user=$(echo "$line" | cut -f 1 -d ' ')
    pass=$(echo "$line" | cut -f 2 -d ' ')
    imapsync \
        --host1 "$from" \
        --user1 "$user" \
        --password1 "$pass" \
        --host2 "$to" \
        --user2 "$user" \
        --password2 "$pass"
done

Save it into a file called imap-copy.sh, make it executable and call it like this:

./imap-copy.sh -f host1 -t host2 <<< "user password"

Or you can create a file with user/password pairs (space delimited) and call it like:

./imap-copy.sh -f host1 -t host2 < email-password.list

Useful when moving a domain from one server to another.

Redirect to https via .htaccess

25 Jan 2017 in Apache

RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]

The [L] flag causes mod_rewrite to stop processing the rule set, and the [R=301] flag issues a permanent HTTP redirect to the browser. This requires the mod_rewrite module to be enabled.

For a complete reference head over to the Apache docs.