Wednesday, April 27, 2016

The 5 Most Common UI Design Mistakes

Although the title UI Designer suggests a sort of departure from the traditional graphic designer, UI design is still a part of the historical trajectory of the visual design discipline.
With each movement or medium, the discipline has introduced new graphic languages, layouts, and design processes. Between generations, the designer has straddled the transition from press to xerox, or paper to pixel. Across these generations, graphic design has carried out the responsibility of representing the visual language of each era respectively.
Therefore, as UI Design makes the transition out of its infancy, what sort of graphic world can we expect to develop? Unfortunately, based on the current trajectory, the future may look bleak. Much of UI Design today has become standardized and repeatable. Design discussions online revolve around learning the rules to make designs work safely, rather than pushing the envelope or imagining new things. The tendency of UI Designers to resort to patterns and trends has not only created a bland visual environment, but has also diminished the value of the designer as processes become more and more formulaic. The issue is not one of technicalities, but of impending visual boredom.
Thus, the Top Five Common UI Design mistakes are:
  • Following Design Rules
  • Abusing the Grid
  • Misunderstanding Typefaces
  • Patterns and the Standardization of UI Design
  • Finding Safety in Contrast
UI Design Rule Book
Understand principles and be creative within their properties. Following the rules will only take you where others have been.


The world of graphic design has always followed sets of rules and standards. Quite often, the common mistakes made in any design discipline coincide with a standard rule that has been broken. From this perspective, the design rules seem trustworthy enough to follow.
However, in just about any design discipline, new design movements and creative innovation have generally resulted from consciously breaking said rule book. This is possible because design is conditional, requiring the discretion of the designer, rather than a process with finite answers. Therefore, the design rules should be treated as guidelines rather than hard-and-fast rules. The experienced designer knows and respects the rule book just enough to be able to break out of it.
Unfortunately, the way that design is often discussed online is within sets of do’s and don’ts. Top mistakes and practices for design in 10 easy steps! Design isn’t so straightforward, and requires a much more robust understanding of principles and tendencies, rather than checklists to systematically carry out.
The concern is that if designers were to cease ‘breaking the rules’, nothing creatively new would ever be made. If UI designers only develop their ability to follow guidelines, rather than make their own decisions, they may quickly become irrelevant. How else will we argue a value greater than off-the-shelf templates?

Be Wary of Top Ten Design Rules

The issue with design rules in today’s UI design community is that they are so abundant. In the interest of solving any problem, the designer can look to the existing UI community and its set of solutions, rather than solve the issue on their own. However, the abundance of these guides and rules has made them less credible.
A Google search for “Top UI Design Mistakes” yields half a million results. So, what are the chances that most, if any, of these authors agree with one another? Or that each design tip discussed will coincide accurately with the design problems of a reader?
Often, educational articles online discuss acute problems rather than the guiding design principles behind them. The result is that new designers never learn why design works the way it does; they only become able to copy what has come before. Isn’t it concerning that none of these articles encourages something like play?
The designer should have a toolkit of principles to guide them, rather than a book of rules prescribing predetermined designs. Press x for parallax scrolling and y for carousels. Before choosing, refer to the most recent blog post on which navigational tool is trending. Boring!
Trends are like junk food for designers. Following trends produces cheap designs that may offer some initial payback, but little worth in the long run. Not only may trendy designs quickly become dated or ineffective, but you, the designer, shouldn’t expect any sense of reward when designing this way. Although working to invent your own styles and systems is a lot of work, it’s worth it day in and day out. There’s just something about copying that never seems to feed the soul.


Abusing the Grid

Despite my treatise against rules, here’s a rule: there is no way for a UI Designer to design without a grid. The web or mobile interface is fundamentally based on a pixel-by-pixel organization; there’s no way around it. However, this does not mean that the interface has to restrict designers to gridded appearances, or even gridded processes.

Using the Grid as a Trendy Tool

Generally, making design moves in response to trends can easily lead to poor design. Perhaps what results is a satisfactory, mostly functional product, but it will almost certainly be boring or uninteresting. To be trendy is to be commonplace. Therefore, when employing the grid in a design, understand what the grid has to offer as a tool, and what it might convey. Grids generally represent neutrality, as everything within the constraints of a grid appears equal. Grids also allow for a neutral navigational experience: users can jump from item to item without any interference from the designer’s curatorial hand. With other navigational structures, by contrast, the designer may be able to group content or establish desired sequences.
Although a useful tool, the grid can be very limiting to designers.

Defaulting to the Grid as a Work Flow

Dylan Fracareta, faculty of RISD and director of PIN-UP Magazine, points out that “most people start off with a 12 - column grid…because you can get 3 and 4 off of that”. The danger here is that the designer immediately predetermines anything they might come up with. Alternatively, Fracareta limits himself to using the move tool with set quantities, rather than physically placing things against a grid line. Although this establishes order, it opens up more potential for unexpected outcomes. Designing for the browser used to mean inputting some code, waiting, and seeing what happens. Now, web design has returned to a more traditional form of layout design that’s “more like adjusting two sheets of transparent paper”. How can we as designers benefit from this process?

Working Without a Grid

Although grids can be restricting, they are one of our most traditional forms of organization. The grid is intuitive. The grid is neutral and unassuming. Therefore, grids allow content to speak for itself, and users to navigate at will and with ease. Despite my warnings about the restrictiveness of grids, different arrays allow for different levels of guidance or freedom.


Patterns and the Standardization of UI Design

The concept of standardized design elements predates UI design. Architectural details have been repeated in practice for typical conditions for centuries. Generally, this practice makes sense for certain parts of a building that are rarely perceived by a user. However, once architects began to standardize common elements like furniture dimensions or handrail heights, people eventually expressed disinterest in the boring, beige physical environment that resulted. Not only this, but standardized dimensions proved to be ineffective: although generated as averages, they didn’t really apply to the majority of the population. Thus, although repeatable details have their place, they should be used critically.
If we as designers choose to automate, what value are we providing?

Designers Using the Pattern as Product

Many UI designers don’t view the pattern as a time-saving tool, but rather as an off-the-shelf solution to design problems. Patterns are intended to take recurring tasks or artefacts and standardize them in order to make the designer’s job easier. Instead, certain patterns like F-pattern layouts, carousels, or pagination have become the entire structure of many of our interfaces.

Justification for the Pattern is Skewed

Designers tell themselves that the F-shaped pattern exists as a result of the way people read on the web. Espen Brunborg points out that perhaps people read this way as a result of us designing for that pattern. “What’s the point of having web designers if all they do is follow the recipe?” Brunborg asks.


Misunderstanding Typefaces

Many designers’ quick tips suggest hard and fast rules about fonts as well. Each rule is shouted religiously: “One font family only! Monospaced fonts are dead! Avoid thin fonts at all costs!” But really, the only legitimate rules on type, text, and fonts should be to enforce legibility and convey meaning. As long as type is legible, there may very well be an appropriate opportunity for all sorts of typefaces. The UI Designer must take on the responsibility of knowing the history, uses, and designed intentions of each font they implement in a UI.

Consider a Typeface Only for Legibility

Typefaces convey meaning as well as affect legibility. With all of the discussion surrounding rules for proper legibility on devices, designers are forgetting that type is designed to augment a body of text with a sensibility as much as it is meant to be legible. Legibility is critical, I do not dispute this, but my point is that legibility should be an obvious goal. Otherwise, why wouldn’t we have just stopped at Helvetica, or maybe Highway Gothic? The important thing to remember is that fonts are not just designed for different contexts of legibility; typefaces are also essential for conveying meaning or giving a body of text a mood.
Typefaces are each designed for their own uses. Don't allow narrow-minded rules to restrict an exploration of the world of type.

Avoiding Thin Fonts At All Costs

Now that the trend has come (and almost gone?), a common design criticism is to avoid thin fonts entirely. In the same way thin fonts arrived as a trend, they may leave as one. However, the goal should be to understand the principles of typefaces rather than to follow trends at all.
Some say that they’re impossible to read or unreliable between devices. All legitimate points. Yet this represents a condition in the current discussion of UI design: font choice is understood by designers only as a technical choice regarding legibility, rather than also as a matter of the meaning and value of typefaces. The concern is that if legibility were the only concern a designer carried, would thin fonts be done away with entirely?
Understand why you are using a thin font, and in what contexts. Bold, thick text is actually much more difficult to read at length than thinner fonts. As bold fonts carry more visual weight, they’re more appropriate for headings or content with little text. And since thin fonts are often serifs, their suitability for body text follows: as serif characters flow together when read in rapid succession, they make for much more comfortable long reading.
As well, thin fonts are often chosen because they convey elegance. So, if a designer was working on an interface for a client whose mandate was to convey elegance, they might find themselves hard pressed to find a heavy typeface to do the job.

Not Enough Variation

A common mistake is to not provide enough variation between fonts in an interface. Changing fonts is a good navigational tool to establish visual hierarchy, or potentially different functions within an interface. A crash course on hierarchy will teach you that generally the largest items, or boldest fonts, should be the most important, and carry the most visual weight. Visual importance can convey content headings, or perhaps frequently used functions.

Too Much Variation

A common UI Design mistake is to load several different typefaces from different families, each denoting a unique function. The issue with making every font choice special is that, when there are many fonts, no font stands out. Changing fonts is a good navigational tool for establishing visual hierarchy or distinguishing functions within an interface; if every font is different, there is too much confusion for a user to recognize any order.


Finding Safety in Contrast

A common entry on many Top UI Design Mistake lists is that designers should avoid low-contrast interfaces. There are many instances in which low-contrast designs are illegible and ineffective; that much is true. However, as with the previous points, my worry is that this language produces, in response, a high-contrast design culture.

Defaulting to High Contrast

The issue is that high contrast is aesthetically easy to achieve. High contrast visuals are undeniably stimulating or exciting. However, there are many more moods in the human imagination to convey or communicate with, other than high stimulation. To be visually stimulating may also be visually safe.
The same issue is actually occurring in sci-fi film. The entire industry has resorted to black and neon blue visuals as a way to trick viewers into accepting ‘exciting’ visuals, instead of new, creative, or beautiful visuals. This article points out what the sci-fi industry is missing out on by producing safe visuals.
Functionally, if every element in an interface is in high contrast to another, then nothing stands out. This defeats the potential value of contrast as a hierarchical tool. Considering different design moves as tools, rather than rules to follow is essential in avoiding stagnant, trendy design.

Illegibly Low Contrast

The use of low-contrast fonts and backgrounds is a commonly made mistake. However, this could potentially be discussed as a beta-testing mistake rather than a design mistake.
How the design element relates as a low contrast piece to the rest of the interface is a design concern. The issue could be that the most significant item hierarchically is low in contrast to the rest of the interface. For the interface to communicate its organizational structure, the elements should contrast one another in a certain way. This is a design discussion. Whether or not it is legible is arguably a testing mistake.
The point is that in only discussing contrast as a technical issue resolvable by adjusting a value, designers miss out on the critical understanding of what contrast is principally used for.


As with the previous four mistakes, playing it safe will rarely result in a dysfunctional website, just a boring one. The mistake is in being safe. This overly cautious method of design may not cause an individual project to fail. However, this series of safe mistakes, performed across the greater web community, can mean greater failures beyond any individual UI design project. The role of the designer should be to imagine, thoughtfully experiment, and create, not to dutifully follow rules and guidelines.
The original article is found on the Toptal Design Blog.

Monday, April 18, 2016

Top Ten Front-End Design Rules For Developers

As front-end developers, our job is, essentially, to turn designs into reality via code. Understanding, and being competent in, design is an important component of that. Unfortunately, truly understanding front-end design is easier said than done. Coding and aesthetic design require some pretty different skill sets. Because of that, some front-end devs aren’t as proficient in the design aspect as they should be, and as a result, their work suffers.
My goal is to give you some easy-to-follow rules and concepts, from one front-end dev to another, that will help you go from start to finish of a project without messing up what your designers worked so hard on (or possibly even allowing you to design your own projects with decent results).
Of course, these rules won’t take you from bad to magnificent in the time it takes to read one article, but if you apply them to your work, they should make a big difference.

Do Stuff In A Graphics Program

It’s truly rare that you complete a project from start to finish while maintaining every single aesthetic mutation in the design files. And, unfortunately, designers aren’t always around to run to for a quick fix.
Therefore, there always comes a point in any front-end job where you end up having to make some aesthetic-related tweaks. Whether it’s making the checkmark that shows when you check the checkbox, or making a page layout that the PSD missed, front-enders often end up handling these seemingly minor tasks. Naturally, in a perfect world this wouldn’t be the case, but I have yet to find a perfect world, hence we need to be flexible.
A good front-end developer has to use professional graphics tools. Accept no substitute.
For these situations, you should always use a graphics program for mockups. I don’t care which tool you choose: Photoshop, Illustrator, Fireworks, GIMP, whatever. Just don’t attempt to design from your code. Spend a minute launching a real graphics program and figuring out how it should look, then go to the code and make it happen. You may not be an expert designer, but you’ll still end up with better results.

Match the Design, Don’t Try To Beat It

Your job is not to impress with how unique your checkmark is; your job is to match it to the rest of the design.
Those without a lot of design experience can easily be tempted to leave their mark on the project with seemingly minor details. Please leave that to the designers.
Developers have to match the original front-end design as closely as possible.
Instead of asking “Does my checkmark look amazing?” you should be asking, “How well does my checkmark match the design?”
Your focus should always be on working with the design, not on trying to outdo it.

Typography Makes All the Difference

You’d be surprised to know how much of the end look of a design is influenced by typography. You’d be just as surprised to learn how much time designers spend on it. This is not a “pick-it-and-go” endeavor; some serious time and effort goes into it.
If you end up in a situation where you actually have to choose typography, you should spend a decent amount of time doing so. Go online and research good font pairings. Spend a few hours trying those pairings and making sure you end up with the best typography for the project.
Is this font right for your project? When in doubt, consult a designer.
If you’re working with a design, then make sure you follow the designer’s typography choices. This doesn’t just mean choosing the font, either. Pay attention to the line spacing, letter spacing, and so on. Don’t overlook how important it is to match the typography of the design.
Also, make sure you use the right fonts in the correct spot. If the designer uses Georgia for headers only and Open Sans for body, then you shouldn’t be using Georgia for body and Open Sans for headers. Typography can make or break aesthetics easily. Spend enough time making sure you are matching your designer’s typography. It will be time well spent.

Front-end Design Doesn’t Tolerate Tunnel Vision

You’ll probably be making small parts of the overall design.
Tunnel vision is a common pitfall for front-end developers. Don’t focus on a single detail, always look at the big picture.
An example I’ve been using is making the checkmark for a design that includes custom checkboxes without showing them checked. It’s important to remember that the parts you are making are small parts of an overall design. Make your checkmark look as important as a checkmark on a page should look, no more, no less. Don’t get tunnel vision about your one little part and make it something it shouldn’t be.
In fact, a good technique for doing this is to take a screenshot of the program so far, or of the design files, and design within it, in the context in which it will be used. That way, you really see how it affects other design elements on the page, and whether it fits its role properly.

Relationships And Hierarchy

Pay special attention to how the design works with hierarchy. How close are the titles to the body of text? How far are they from the text above them? How does the designer seem to be indicating which elements/titles/text bodies are related and which aren’t? They’ll commonly do these things by boxing related content together, using varying white space to indicate relationships, using similar or contrasting colors to indicate related/unrelated content, and so on.
A good front-end developer will respect design relationships and hierarchy. A great developer will understand them.
It’s your job to make sure that you recognize the ways in which the design accomplishes relationships and hierarchy and to make sure those concepts are reflected in the end product (including for content that was not specifically designed, and/or dynamic content). This is another area (like typography) where it pays to take extra time to make sure you’re doing a good job.

Be Picky About Whitespace And Alignment

This is a great tip for improving your designs and/or better implementing the designs of others: If the design seems to be using spacings of 20 units, 40 units, etc., then make sure every spacing is a multiple of 20 units.
This is a really drop-dead simple way for someone with no eye for aesthetics to make a significant improvement quickly. Make sure your elements are aligned down to the pixel, and that the spacing around every edge of every element is as uniform as possible. Where you can’t do that (such as places where you need extra space to indicate hierarchy), make them exact multiples of the spacing you’re using elsewhere, for example two times your default to create some separation, three times to create more, and so on.
Do your best to understand how the designer used whitespace and follow those concepts in your front-end build.
A lot of devs achieve this for specific content in the design files, but when it comes to adding/editing content, or implementing dynamic content, the spacing can go all over the place because they didn’t truly understand what they were implementing.
Do your best to understand how the designer used whitespace and follow those concepts in your build. And yes, spend time on this. Once you think your work is done, go back and measure the spacing to ensure you have aligned and uniformly spaced everything as much as possible, then try out the code with lots of varying content to make sure it’s flexible.
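To make the spacing-scale idea above concrete, here is a small sketch of how it could be checked mechanically. The helper below is hypothetical, not from the original article; the base unit of 20 is just the example value used above, and in practice you would use whatever unit the design files reveal:

```python
# Hypothetical helper: flag spacings that break a design's base spacing unit.
# A spacing "fits the scale" if it is an exact multiple of the base unit.

def off_grid_spacings(spacings, base=20):
    """Return the spacing values that are not exact multiples of `base`."""
    return [s for s in spacings if s % base != 0]

# Margins/paddings measured from an implemented page, in pixels (made up).
measured = [20, 40, 40, 60, 37, 20, 55]

# 37 and 55 are not multiples of 20, so they would need adjusting.
print(off_grid_spacings(measured))  # -> [37, 55]
```

The same check works for any scale: with an 8px base unit, pass `base=8`. The point is not the tooling but the habit of measuring your spacing after the fact rather than trusting your eye.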

If You Don’t Know What You’re Doing, Do Less

I’m not one of those people that thinks every project should use minimalist design, but if you’re not confident in your design chops and you need to add something, then less is more.
Less is more. If your designer did a good job to begin with, you should refrain from injecting your own design ideas.
The designer took care of the main stuff; you only need to do minor fillers. If you’re not very good at design, then a good bet is to do as minimal amount as you can to make that element work. That way, you’re injecting less of your own design into the designer’s work, and affecting it as little as possible.
Let the designer’s work take center stage and let your work take the back seat.

Time Makes Fools Of Us All

I’ll tell you a secret about designers: 90 percent (or more) of what they actually put down on paper, or a Photoshop canvas, isn’t that great.
They discard far more than you ever see. It often takes many revisions and fiddling with a design to get it to the point where they’d even let the guy in the next cubicle see their work, never mind the actual client. You usually don’t go from a blank canvas to a good design in one step; there are a bunch of iterations in between. People rarely make good work until they understand that and allow for it in their process.
If you think the design can be improved upon, consult your designer. It’s possible they already tried a similar approach and decided against it.
So how do you implement this? One important method is taking time between versions. Work until it looks like something you like then put it away. Give it a few hours (leaving it overnight is even better), then open it up again and take a look. You’ll be amazed at how different it looks with fresh eyes. You’ll quickly pick out areas for improvement. They’ll be so clear you’ll wonder how you possibly missed them in the first place.
In fact, one of the better designers I’ve known takes this idea a lot further. He would start by making three different designs. Then, he’d wait at least 24 hours, look at them again and throw them all out and start from scratch on a fourth. Next, he’d allow a day between each iteration as it got better and better. Only when he opened it up one morning, and was totally happy, or at least, as close as a designer ever gets to totally happy, would he send it to the client. This was the process he used for every design he made, and it served him very well.
I don’t expect you to take it that far, but it does highlight how helpful time without “eyes on the design” can be. It’s an integral part of the design process and can make improvements in leaps and bounds.

Pixels Matter

You should do everything in your power to match the original design in your finished program, down to the last pixel.
Front-end developers should try to match the original design down to the last pixel.
In some areas you can’t be perfect. For example, your control over letter-spacing might not be quite as precise as that of the designer’s, and a CSS shadow might not exactly match a Photoshop one, but you should still attempt to get as close as possible. For many aspects of the design, you really can get pixel-perfect precision. Doing so can make a big difference in the end result. A pixel off here and there doesn’t seem like much, but it adds up and affects the overall aesthetic much more than you’d think. So keep an eye on it.
There are a number of tools that help you compare original designs to end results, or you can simply take screenshots and paste them into the design file to compare each element as closely as possible. Just lay the screenshot over the design and make it semi-transparent so that you can see the differences; then you know how much adjustment you need to make to get it spot on.
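The semi-transparent overlay described above is simple alpha blending: each pixel of the comparison image is a weighted average of the design pixel and the screenshot pixel, so mismatches show up as ghosting. Here is a minimal sketch of that math in plain Python with made-up pixel data; a real workflow would do this in an image editor or with an image library, not by hand:

```python
# Sketch of the semi-transparent overlay: blend each screenshot pixel over
# the design pixel at a given opacity. Identical pixels are unchanged, while
# differing pixels land partway between the two values and stand out.

def blend_pixel(design_rgb, screenshot_rgb, alpha=0.5):
    """Alpha-blend one RGB pixel; `alpha` is the opacity of the screenshot layer."""
    return tuple(
        round((1 - alpha) * d + alpha * s)
        for d, s in zip(design_rgb, screenshot_rgb)
    )

def blend_image(design, screenshot, alpha=0.5):
    """Blend two same-sized images given as row-major lists of RGB tuples."""
    return [
        [blend_pixel(d, s, alpha) for d, s in zip(drow, srow)]
        for drow, srow in zip(design, screenshot)
    ]

# Two hypothetical 1x2 "images": the first pixel matches, the second is off.
design     = [[(255, 255, 255), (0, 0, 0)]]
screenshot = [[(255, 255, 255), (40, 40, 40)]]

overlay = blend_image(design, screenshot)
print(overlay)  # -> [[(255, 255, 255), (20, 20, 20)]]
```

In practice you would let Photoshop layers or a library such as Pillow do the blending, but the principle is the same: wherever the overlay differs from the pure design color, your implementation has drifted.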

Get Feedback

It’s hard to gain an “eye for design.” It’s even harder to do it on your own. You should seek the input of others to really see how you can make improvements.
I am not suggesting you grab your neighbor and ask for advice, I mean you should consult real designers and let them critique your work and offer suggestions.
Let designers critique your work. Put their criticism to good use and don’t antagonize them.
It takes some bravery to do so, but in the end it is one of the most powerful things you can do to improve the project in the short-term, and to improve your skill level in the long run.
Even if all you have to fine tune is a simple checkmark, there are plenty of people willing to help you. Whether it’s a designer friend, or an online forum, seek out qualified people and get their feedback.
Build a long-lasting, productive relationship with your designers. It’s vital for useful feedback, quality, and execution.
It may sound time consuming, and may cause friction between you and your designers, but in the big scheme of things, it’s worth it. Good front-end developers rely on valuable input from designers, even when it’s not something they like to hear.
Therefore, it’s vital to build and maintain a constructive relationship with your designers. You’re all in the same boat, so to get the best possible results you have to collaborate and communicate every step of the way. The investment in building bonds with your designers is well worth it, as it will help everyone do a better job and execute everything on time.


To summarize, here is a short list of design tips for front-end developers:
  • Design in a graphics program. Don’t design from code, not even the small stuff.
  • Match the design. Be conscious of the original design and don’t try to improve it, just match it.
  • Typography is huge. The time you spend making sure it’s right should reflect its importance.
  • Avoid tunnel vision. Make sure your additions stand out only as much as they should. They’re not more important just because you designed them.
  • Relationships and hierarchy: Understand how they work in the design so that you can implement them properly.
  • Whitespace and alignment are important. Make them accurate to the pixel and apply them evenly throughout anything you add.
  • If you’re not confident in your skills, then make your additions as minimally styled as you can.
  • Take time between revisions. Come back later to see your design work with fresh eyes.
  • Pixel-perfect implementation is important wherever possible.
  • Be brave. Seek out experienced designers to critique your work.
Not every front-end developer is going to be a fantastic designer, but every front-end dev should at least be competent in terms of design.
You need to understand enough about design concepts to identify what’s going on, and to properly apply the design to your end product. Sometimes, you can get away with blind copying if you’ve got a thorough designer (and if you’re detail oriented enough to truly copy it pixel for pixel).
However, in order to make large projects shine across many variations of content, you need some understanding of what’s going through the designer’s head. You don’t merely need to see what the design looks like, you need to know why it looks the way it does, and that way you can be mindful of technical and aesthetic limitations that will affect your job.
So, even as a front-end developer, part of your regular self-improvement should always include learning more about design.
The original article was written by BRYAN GREZESZAK - FREELANCE SOFTWARE ENGINEER @ TOPTAL and can be read here.

Wednesday, April 13, 2016

HSA For Developers: Heterogeneous Computing For The Masses

What do chipmakers like AMD, ARM, Samsung, MediaTek, Qualcomm, and Texas Instruments have in common? Well, apart from the obvious similarities between these chip-making behemoths, they also happen to be founders of the HSA Foundation. What’s HSA, and why does it need a foundation backed by industry heavyweights?
In this post I will try to explain why HSA could be a big deal in the near future, so I’ll start with the basics: What is HSA and why should you care?
HSA stands for Heterogeneous System Architecture, which sounds kind of boring, but trust me, it could become very exciting, indeed. HSA is essentially a set of standards and specifications designed to allow further integration of CPUs and GPUs on the same bus. This is not an entirely new concept; desktop CPUs and mobile SoCs have been employing integrated graphics and using a single bus for years, but HSA takes it to the next level.
Same load, different architectures: CPUs and GPUs excel at different tasks. What happens when they start sharing the load, with no developer input?
Rather than simply using the same bus and shared memory for the CPU and GPU, HSA also allows these two vastly different architectures to work in tandem and share tasks. It might not sound like a big deal, but if you take a closer look, and examine the potential long-term effects of this approach, it starts to look very “sweet” in a technical sense.

Oh No! Here’s Another Silly Standard Developers Have To Implement

Yes and no.
The idea of sharing the same bus is not new, and neither is the idea of employing highly parallelised GPUs for certain compute tasks (which don’t involve rendering headshots). It’s been done before, and I guess most of our readers are already familiar with GPGPU standards like CUDA and OpenCL.
However, unlike the CUDA or OpenCL approach, HSA would effectively take the developer out of the equation, at least when it comes to assigning different loads to different processing cores. The hardware would decide when to offload calculations from the CPU to the GPU and vice versa. HSA is not supposed to replace established GPGPU programming languages like OpenCL, as they can be implemented on HSA hardware as well.
That’s the whole point of HSA: It’s supposed to make the whole process easy, even seamless. Developers won’t necessarily have to think about offloading calculations to the GPU. The hardware will do it automatically.
A lot of big names support HSA. However, industry heavyweights Intel and Nvidia are not on the list.
To accomplish this, HSA will have to enjoy support from multiple chipmakers and hardware vendors. While the list of HSA supporters is impressive, Intel is conspicuously absent from this veritable who’s who of the chip industry. Given Intel’s market share in both desktop and server processor markets, this is a big deal. Another name you won’t find on the list is Nvidia, which is focused on CUDA, and is currently the GPU compute market leader.
However, HSA is not designed solely for high-performance systems and applications running on hardware that usually sports an Intel Inside sticker. HSA can also be used in energy-efficient mobile devices, where Intel has a negligible market share.
So, HSA is supposed to make life easier, but is it relevant yet? Will it catch on? This is not a technological question, but an economic one. It will depend on the invisible hand of the market. So, before we proceed, let’s take a closer look at where things stand right now, and how we got here.

HSA Development, Teething Problems And Adoption Concerns

As I said in the introduction, HSA is not exactly a novel concept. It was originally envisioned by Advanced Micro Devices (AMD), which had a vested interest in getting it off the ground. A decade ago, AMD bought graphics specialists ATI, and since then the company has been trying to leverage its access to cutting edge GPU technology to boost overall sales.
On the face of it, the idea was simple enough: AMD would not only continue developing and manufacturing cutting-edge discrete GPUs, it would also integrate ATI’s GPU technology in its processors. AMD’s marketing department called the idea ‘Fusion’, and HSA was referred to as Fusion System Architecture (FSA). Sounds great, right? Getting a decent x86 processor with good integrated graphics sounded like a good idea, and it was.
Unfortunately, AMD ran into a number of issues along the way; I’ll single out a few of them:
  • Any good idea in tech is bound to be picked up by competitors, in this case – Intel.
  • AMD lost the technological edge to Intel and found it increasingly difficult to compete in the CPU market due to Intel’s foundry technology lead.
  • AMD’s execution was problematic and many of the new processors were late to market. Others were scrapped entirely.
  • The economic meltdown of 2008 and subsequent mobile revolution did not help.
These, and a number of other factors, conspired to blunt AMD’s edge and prevent market adoption of its products and technologies. AMD started rolling out processors with the new generation of integrated Radeon graphics in mid-2011, and it started calling them Accelerated Processing Units (APUs) instead of CPUs.
Marketing aside, AMD’s first generation of APUs (codenamed Llano), was a flop. The chips were late and could not keep up with Intel’s offerings. Serious HSA features were not included either, but AMD started adding them in its 2012 platform (Trinity, which was essentially Llano done right). The next step came in 2014, with the introduction of Kaveri APUs, which supported heterogeneous memory management (the GPU IOMMU and CPU MMU shared the same address space). Kaveri also brought about more architectural integration, enabling coherent memory between the CPU and GPU (AMD calls it hUMA, which stands for Heterogeneous Unified Memory Access). The subsequent Carrizo refresh added even more HSA features, enabling the processor to context switch compute tasks on the GPU and do a few more tricks.
The upcoming Zen CPU architecture, and the APUs built on top of it, promises to deliver even more, if and when it shows up on the market.
So what’s the problem?
AMD was not the only chipmaker to realise the potential of on-die GPUs. Intel started adding them to its Core CPUs as well, as did ARM chipmakers, so integrated GPUs are currently used in virtually every smartphone SoC, plus the vast majority of PCs/Macs. In the meantime, AMD’s position in the CPU market was eroded. The market share slump made AMD’s platforms less appealing to developers, businesses, and even consumers. There simply aren’t that many AMD-based PCs on the market, and Apple does not use AMD processors at all (although it did use AMD graphics, mainly due to OpenCL compatibility).
AMD no longer competes with Intel in the high-end CPU market, but even if it did, it wouldn’t make much of a difference in this respect. People don’t buy $2,000 workstations or gaming PCs to use integrated graphics. They use pricey, discrete graphics, and don’t care much about energy efficiency.

How About Some HSA For Smartphones And Tablets?

But, wait. What about mobile platforms? Couldn’t AMD just roll out similar solutions for smartphone and tablet chips? Well, no, not really.
You see, a few years after the ATI acquisition, AMD found itself in a tough financial situation, compounded by the economic crisis, so it decided to sell off its Imageon mobile GPU division to Qualcomm. Qualcomm renamed the products Adreno (anagram of Radeon), and went on to become the dominant player in the smartphone processor market, using freshly repainted in-house GPUs.
As some of you may have noticed, selling a smartphone graphics outfit just as the smartphone revolution was about to kick off does not look like a brilliant business move, but I guess hindsight is always 20/20.
HSA used to be associated solely with AMD and its x86 processors, but this is no longer the case. In fact, if all HSA Foundation members started shipping HSA-enabled ARM smartphone processors, they would outsell AMD’s x86 processors several fold, both in terms of revenue and units shipped. So what happens if they do? What would that mean for the industry and developers?
Well, for starters, smartphone processors already rely on heterogeneous computing, sort of. Heterogeneous computing usually refers to the concept of using different architectures in a single chip, and considering all the components found on today’s highly integrated SoCs, this could be a very broad definition. As a result, nearly every SoC may be considered a heterogeneous computing platform, depending on one’s standards. Sometimes, people even refer to different processors based on the same instruction set as a heterogeneous platform (for example, mobile chips with ARM Cortex-A57 and A53 cores, both of which are based on the 64-bit ARMv8 instruction set).
Many observers agree that most ARM-based processors may now be considered heterogeneous platforms, including Apple A-series chips, Samsung Exynos SoCs and similar processors from other vendors, namely big players like Qualcomm and MediaTek.
But why would anyone need HSA on smartphone processors? Isn’t the whole point of using GPUs for general computing to deal with professional workloads, not Angry Birds and Uber?
Yes it is, but that does not mean that a nearly identical approach can’t be used to boost efficiency, which is a priority in mobile processor design. So, instead of crunching countless parallelized tasks on a high-end workstation, HSA could also be used to make mobile processors more efficient and versatile.
Few people take a close look at these processors; most just check the spec sheet when they’re buying a new phone, looking at the numbers and brands. They usually don’t look at the SoC die itself, which tells us a lot, and here is why: GPUs on high-end smartphone processors take up more silicon real estate than CPUs. Considering they’re already there, it would be nice to put them to good use in applications other than gaming, wouldn’t it?
A hypothetical, fully HSA-compliant smartphone processor could allow developers to tap this potential without adding much to the overall production costs, implement more features, and boost efficiency.
Here is what HSA could do for smartphone processors, in theory at least:
  • Improve efficiency by transferring suitable tasks to the GPU.
  • Boost performance by offloading the CPU in some situations.
  • Utilize the memory bus more effectively.
  • Potentially reduce chip manufacturing costs by tapping more silicon at once.
  • Introduce new features that could not be handled by the CPU cores in an efficient way.
  • Streamline development by virtue of standardisation.
Sounds nice, especially when you consider developers are unlikely to waste a lot of time on implementation. That’s the theory, but we will have to wait to see it in action, and that may take a while.

How Does HSA Work Anyway?

I already outlined the basics in the introduction, and I am hesitant to go into too much detail for a couple of reasons: Nobody likes novellas published on a tech blog, and HSA implementations can differ.
Therefore, I will try to outline the concept in a few hundred words.
On a standard system, an application would offload calculations to the GPU by transferring buffers to it, which would involve a CPU call prior to queuing. The CPU would then schedule the job and pass it to the GPU, which would pass it back to the CPU upon completion. Then the application would get the buffer, which would again have to be mapped by the CPU before it is ready. As you can see, this approach involves a lot of back and forth.
Different architectures on one memory bus. Streamlining is the gist of HSA.
On an HSA system, the application would queue the job, the HSA CPU would take over, hand it off to the GPU, get it back, and get it to the application. Done.
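The contrast between the two dispatch paths can be sketched as a toy model. This is purely illustrative Python (the step names and functions are mine, not from the HSA spec); the point is simply how many hops disappear once the copies do:

```python
# Toy model of the two GPU dispatch paths described above.
# Step names are illustrative, not taken from any real driver or the HSA spec.

def legacy_dispatch():
    # Separate memory pools: every job pays for explicit copies and mappings.
    return [
        "app maps buffer via CPU",
        "CPU copies buffer to GPU memory",
        "CPU schedules kernel on GPU",
        "GPU runs kernel",
        "GPU copies result back to CPU memory",
        "CPU maps result for the app",
    ]

def hsa_dispatch():
    # Shared virtual memory: the app's pointer is already valid on the GPU.
    return [
        "app queues job with a pointer into shared memory",
        "GPU runs kernel in place",
        "app reads result through the same pointer",
    ]

print(len(legacy_dispatch()), "steps vs", len(hsa_dispatch()), "steps")  # 6 steps vs 3 steps
```

The step counts are arbitrary, but the shape of the difference is the one the paragraphs above describe: the HSA path removes the copy/map round trips entirely rather than merely speeding them up.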
This is made possible by sharing system memory directly between the CPU and GPU, although other computing units could be involved too (DSPs for example). To accomplish this level of memory integration, HSA employs a virtual address space for compute devices. This means CPU and GPU cores can access the memory on equal terms, as long as they share page tables, allowing different devices to exchange data through pointers.
This is obviously great for efficiency, because it is no longer necessary to allocate memory to the GPU and CPU using virtual memory for each. Thanks to unified virtual memory, both of them can access the system memory according to their needs, ensuring superior resource utilization and more flexibility.
Imagine a low-power system with 4GB of RAM, 512MB of which is allocated for the integrated GPU. This model is usually not flexible, and you can’t change the amount of GPU memory on the fly. You’re stuck with 256MB or 512MB, and that’s it. With HSA, you can do whatever the hell you want: If you offload a lot of stuff to the GPU, and need more RAM for the GPU, the system can allocate it. So, in graphics-bound applications, with a lot of hi-res assets, the system could end up allocating 1GB or more RAM to the GPU, seamlessly.
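The carve-out example above can be put in code. A minimal sketch with made-up numbers and function names of my own (no real driver or allocator API works like this):

```python
# Toy contrast between a fixed GPU memory carve-out and HSA-style
# on-demand sharing of one 4 GB pool. Numbers and names are illustrative.

TOTAL_MB = 4096

def fixed_carveout(gpu_need_mb, carveout_mb=512):
    # The GPU gets its reserved slice and nothing more, whatever it needs.
    return min(gpu_need_mb, carveout_mb)

def hsa_shared(gpu_need_mb, cpu_in_use_mb):
    # The GPU can take whatever is left of the shared pool, on the fly.
    return min(gpu_need_mb, TOTAL_MB - cpu_in_use_mb)

print(fixed_carveout(1024))    # 512 -> the GPU is starved at its cap
print(hsa_shared(1024, 2048))  # 1024 -> the full request is satisfied
```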
All things being equal, HSA and non-HSA systems will share the same memory bandwidth and have access to the same amount of memory, but the HSA system could end up using it much more efficiently, thus improving performance and reducing power consumption. It’s all about getting more for less.

What Would Heterogeneous Computing Be Good For?

The simple answer? Heterogeneous computing, with HSA as one of its implementations, should be a good choice for all compute tasks better suited to GPUs than CPUs. But what exactly does that mean? What are GPUs good at, anyway?
Modern, integrated GPUs aren’t very powerful compared to discrete graphics (especially high-end gaming graphics cards and workstation solutions), but they are vastly more powerful than their predecessors.
If you haven’t been keeping track, you might assume that these integrated GPUs are a joke, and for years they were just that: graphics for cheap home and office boxes. However, this started changing at the turn of the decade as integrated GPUs moved from the chipset into the CPU package and die, becoming truly integrated.
This is what an AMD processor die looks like nowadays. We still call them processors, but the GPU takes up substantially more silicon real estate than the CPU.
While still woefully underpowered compared to flagship GPUs, even integrated GPUs pack a lot of potential. Like all GPUs, they excel at single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) loads. If you need to crunch a lot of numbers in repetitive, parallelised loads, GPUs should help. CPUs, on the other hand, are still better at heavy, branched workloads.
That’s why CPUs have fewer cores, usually between two and eight, and the cores are optimized for sequential serial processing. GPUs tend to have dozens, hundreds, and in flagship discrete graphics cards, thousands of smaller, more efficient cores. GPU cores are designed to handle multiple tasks simultaneously, but these individual tasks are much simpler than those handled by the CPU. Why burden the CPU with such loads, if the GPU can handle them with superior efficiency and/or performance?
But if GPUs are so bloody good at it, why didn’t we start using them as general computing devices years ago? Well, the industry tried, but progress was slow and limited to certain niches. The concept was originally called General Purpose Computing on Graphics Processing Units (GPGPU). In the old days, potential was limited, but the GPGPU concept was sound and was subsequently embraced and standardized in the form of Nvidia’s CUDA and Apple’s/Khronos Group’s OpenCL.
CUDA and OpenCL made a huge difference since they allowed programmers to use GPUs in a different, and much more effective, way. CUDA, however, was vendor-specific, available only on Nvidia hardware, while OpenCL was an open standard championed primarily by Apple and AMD/ATI. Microsoft’s DirectCompute API was released with DirectX 11, and allowed for a limited, vendor-agnostic approach (but was limited to Windows).
Let’s sum up by listing a few applications for GPU computing:
  • Traditional high-performance computing (HPC) in the form of HPC clusters, supercomputers, GPU clusters for compute loads, GRID computing, load-balancing.
  • Loads that require physics, which can, but don’t have to, involve gaming or graphics in general. They can also be used to handle fluid dynamics calculations, statistical physics, and a few exotic equations and algorithms.
  • Geometry, almost everything related to geometry, including transparency computations, shadows, collision detection and so on.
  • Audio processing, using a GPU in lieu of DSPs, speech processing, analogue signal processing and more.
  • Digital image processing is what GPUs are designed for (obviously), so they can be used to accelerate image and video post-processing and decoding. If you need to decode a video stream and apply a filter, even an entry-level GPU will wipe the floor with a CPU.
  • Scientific computing, including climate research, astrophysics, quantum mechanics, molecular modelling, and so on.
  • Other computationally intensive tasks, namely encryption/decryption. Whether you need to “mine” cryptocurrencies, encrypt or decrypt your confidential data, crack passwords or detect viruses, the GPU can help.
This is not a complete list of potential GPU compute applications, but readers unfamiliar with the concept should get a general idea of what makes GPU compute different. I also left out obvious applications, such as gaming and professional graphics.
A comprehensive list does not exist, anyway, because GPU compute can be used for all sorts of stuff, ranging from finance and medical imaging to database and statistics loads. You’re limited by your own imagination. So-called computer vision is another up-and-coming application. A capable GPU is a good thing to have if you need to “teach” a drone or driverless car to avoid trees, pedestrians, and other vehicles.
Feel free to insert your favourite Lindsay Lohan joke here.

Developing For HSA: Time For Some Bad News

This may be my personal opinion rather than fact, but I am an HSA believer. I think the concept has a lot of potential, provided it is implemented properly and gains enough support among chipmakers and developers. However, progress has been painfully slow, or maybe that’s just my feeling, with a pinch of wishful thinking. I just like to see new tech in action, and I’m anything but a patient individual.
The trouble with HSA is that it’s not there yet. That does not mean it won’t take off, but it might take a while. After all, we are not just talking about new software stacks; HSA requires new hardware to do its magic. The problem is that much of this hardware is still on the drawing board, but we’re getting there. Slowly.
Unfortunately, the HSA solution stack includes more than the standard suite of software tools. Heterogeneous computing is a symbiosis of software and hardware.
This does not mean developers aren’t working on HSA-related projects, but there’s not a lot of interest, or progress, for that matter. Here are a few resources you should check out if you want to give HSA a go:
  • HSA Foundation @ GitHub is, obviously, the place for HSA-related resources. The HSA Foundation publishes and maintains a number of projects on GitHub, including debuggers, compilers, vital HSAIL tools, and much more. Most resources are designed for AMD hardware.
  • HSAIL resources provided by AMD allow you to get a better idea of the HSAIL spec. HSAIL stands for HSA Intermediate Language, and it’s basically the key tool for back-end compiler writers and library writers who want to target HSA devices.
  • HSA Programmer’s Reference Manual (PDF) includes the complete HSAIL spec, plus a comprehensive explanation of the intermediate language.
  • HSA Foundation resources are limited for the time being and the foundation’s Developers Program is “coming soon,” but there are a number of official developer tools to check out. More importantly, they will give you a good idea of the stack you’ll need to get started.
  • The official AMD Blog features some useful HSA content as well.
This should be enough to get you started, provided you are the curious type. The real question is whether or not you should bother to begin with.

The Future Of HSA And GPU Computing

Whenever we cover an emerging technology, we are confronted with the same dilemma: Should we tell readers to spend time and resources on it, or to keep away, taking the wait and see approach?
I have already made it clear that I am somewhat biased because I like the general concept of GPU computing, but most developers can do without it, for now. Even if it takes off, HSA will have limited appeal and won’t concern most developers. However, it could be important down the road. Unfortunately for AMD, it’s unlikely to be a game-changer in the x86 processor market, but it could prove more important in ARM-based mobile processors. It may have been AMD’s idea, but companies such as Qualcomm and MediaTek are better positioned to bring HSA-enabled hardware to hundreds of millions of users.
It has to be a perfect symbiosis of software and hardware. If mobile chipmakers go crazy over HSA, it would be a big deal. A new generation of HSA chips would blur the line between CPU and GPU cores. They would share the same memory bus on equal terms, and I think companies will start marketing them differently. For example, AMD is already marketing its APUs as “compute devices” comprised of different “compute cores” (CPUs and GPUs).
Mobile chips could end up using a similar approach. Instead of marketing a chip with eight or ten CPU cores, and such and such GPU, chipmakers could start talking about clusters, modules and units. So, a processor with four small and four big CPU cores would be a “dual-cluster” or “dual-module” processor, or a “tri-cluster” or “quad-cluster” design, if they take into account GPU cores. A lot of tech specs tend to become meaningless over time, for example, the DPI on your office printer, or megapixel count on your cheap smartphone camera.
HSA enables different architectures to pull their own weight and tackle wildly different loads with greater efficiency.
It’s not just marketing though. If GPUs become as flexible as CPU cores, and capable of accessing system resources on equal terms as the CPU, why should we even bother calling them by their real name? Two decades ago, the industry stopped using dedicated mathematical coprocessors (FPUs) when they became a must-have component of every CPU. Just a couple of product cycles later, we forgot they ever existed.
Keep in mind that HSA is not the only way of tapping GPUs for computation.
Intel and Nvidia are not on board, and their approach is different. Intel has quietly ramped up GPU R&D investment in recent years, and its latest integrated graphics solutions are quite good. As on-die GPUs become more powerful and take up more silicon real estate, Intel will have to find more ingenious ways of using them for general computing.
Nvidia, on the other hand, pulled out of the integrated graphics market years ago (when it stopped producing PC chipsets), but it did try its luck in the ARM processor market with its Tegra-series processors. They weren’t a huge success, but they’re still used in some hardware, and Nvidia is focusing its efforts on embedded systems, namely automotive. In this setting, the integrated GPU pulls its own weight since it can be used for collision detection, indoor navigation, 3D mapping, and so on. Remember Google’s Project Tango? Some of the hardware was based on Tegra chips, allowing for depth sensing and a few other neat tricks. On the opposite side of the spectrum, Nvidia’s Tesla product line covers the high-end GPU compute market, and ensures Nvidia’s dominance in this niche for years to come.
Bottom line? On paper, GPU computing is a great concept with loads of potential, but the current state of technology leaves much to be desired. HSA should go a long way towards addressing most of these problems, but it’s not supported by all industry players, which is bound to slow adoption further.
It may take a few years, but I am confident GPUs will eventually rise to take their rightful place in the general computing arena, even in mobile chips. The technology is almost ready, and economics will do the rest. How? Well, here’s a simple example. Intel’s current generation Atom processors feature 12 to 16 GPU Execution Units (EUs), while their predecessors had just four EUs, based on an older architecture. As integrated GPUs become bigger and more powerful, and as their die area increases, chipmakers will have no choice but to use them to improve overall performance and efficiency. Failing to do so would be bad for margins and shareholders.
Don’t worry, you’ll still be able to enjoy the occasional game on this new breed of GPU. However, even when you’re not gaming, the GPU will do a lot of stuff in the background, offloading the CPU to boost performance and efficiency.
I think we can all agree this would be a huge deal, especially on inexpensive mobile devices.
The original article was written by NERMIN HAJDARBEGOVIC - TECHNICAL EDITOR @ TOPTAL and can be read here.
If you’d like to find more resources on Toptal designers or hire a Toptal designer, check this out.