Archive for the ‘Web’ Category

px vs em vs rem vs …

October 21, 2019 Leave a comment

In this article I will try to summarize the pros and cons of using px vs em vs rem.
First of all let’s clarify some definitions.

A pixel is the atomic division of the screen (or other display medium) and consists of red, green and blue sub-pixels. But this is not a CSS pixel, this is a hardware pixel. A px (CSS pixel) is actually a reference pixel, defined in terms of visual angle, viewing distance and device pixel density. I won't go into the details here. The idea is that a CSS pixel is not always a pixel on the screen, but more like an optical illusion of an atomic division.
A pt (point) is a measurement unit traditionally used in print media (anything that is to be printed on paper, etc.) and equals 1/72 of an inch.
An em is a measurement unit relative to the font size of the element it refers to. It's not known exactly what em stands for, but it is believed to come from the letter M (spelled em), whose size traditionally accommodates all letters in a font.
A rem is a measurement unit relative to the font size of the root element. It is very much like em – even the name says so: root em – but it has a different point of reference.
A % (percent) is a measurement unit relative to the parent element – for font size, it is relative to the parent's font size.

px and pt are absolute length units, while em, rem and % are relative length units. What's the biggest difference between relative and absolute? Relative units scale according to their reference point (the current element or the root element), while absolute units do not scale at all.

Then what should we use? What's the "perfect" measurement unit? Well, as in the real world, there's no such thing as perfect. So, you have to consider a few things before making a decision. And remember that you can use these units for everything: margins, width and height of elements, image sizes, paddings, media queries etc.

Pixel perfect design versus support for a plethora of devices

If you design a printed report (yes, it's possible with HTML :)) and you want it to look beautiful on paper, then use pt. It is a typographical measurement unit and the most appropriate one for print.

If you want to design a static page that will render beautifully (I mean pixel perfect) on one device only, then use px. But it will render far from beautifully on other devices. It could still be useful if you want to design, for example, a beautiful static ad to be used in desktop browsers.

Why? First of all, because, as I said earlier, pixels are not actually atomic subdivisions of a screen, but rather optical reference units.
Can I patch this somehow? Kind of. Nowadays browsers have a zoom feature, but this approach has downsides. You have to rely on the user to zoom your page in and out. You can also set a zoom programmatically (with CSS zoom or transform/scale), but then you have to know the exact value for each device. And even then your page will not render beautifully, it will merely fit – and it could still be too small (if you designed it on a larger device) or too large to be usable.

When should you use relative measurements like em, rem or %? When you want to create web applications (which consist of dynamic pages) that support different devices. But there's a downside here too – you could lose some of that pixel-perfect look. There is an old concept here, back from the days of Netscape Navigator, the liquid layout, which I think can still suit most needs. The idea is to develop a layout that can easily fit different screen resolutions, mostly accomplished by using relative positioning and relative sizes instead of absolute ones. I wrote about this in a previous post.

Let me clarify with an example. Suppose we have a paragraph with a padding of one character and a font size of 10px. In pixels, this translates to:

p {
    padding: 10px;
}

or with em/rem like this

p {
    padding: 1em;
}

But if the font size changes to 24px, the second case will scale as designed, as opposed to the first case, where the padding becomes just too small (less than half a character).

Browser text size and accessibility

The Web Content Accessibility Guidelines (WCAG) define the success criteria for making a web page accessible. I would like to refer here especially to SC 1.4.4, which concerns the ability of a web page to remain readable (I would say viewable) at different text sizes. The recommended techniques for doing so are to use percent, em or named font sizes. Please keep in mind that these techniques are just recommendations, not requirements.
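As a minimal sketch, the recommended techniques look like this (the selectors and values are my own illustration, not from the guidelines):

```css
/* Font sizes defined with the WCAG-recommended relative techniques.
   Values here are illustrative only. */
body  { font-size: 100%; }     /* percent of the browser's default size */
p     { font-size: 1em; }      /* relative to the parent's font size */
small { font-size: smaller; }  /* a named (keyword) font size */
h1    { font-size: 2rem; }     /* relative to the root font size */
```

Any of these will follow along when the user changes the browser's text size, which is exactly what SC 1.4.4 is after.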

To make this clearer I will turn to a common browser feature – setting the text size. I find this kind of feature especially useful on e-readers, but also on mobiles. Now take the padding example above. If a user sets the font to a larger size, that padding becomes just too subtle to distinguish between paragraphs. Or imagine an emoji image that ends up at less than half the text size and screws up the entire alignment. I know, I know, you could still use icon fonts and special characters for emojis, but I was just making a point about an inline image.

If web accessibility is not a concern for your users, then px is just as good.

Floating point precision

All these CSS units, relative or absolute, accept fractional values. Yes, 0.5px can actually make sense. Remember that a CSS pixel is not a hardware pixel but an illusion, and it can actually represent 1 or 2 hardware pixels on a device with a DPR of 2 or 4. Here the winners are the relative units like em, rem or %, because fractional values of a larger reference unit map more gracefully onto hardware pixels.


It's clear, even from the example above, why CSS based on relative units is easier to maintain than CSS based on absolute ones. Change the reference in one place and everything else scales automatically. With absolute units you have to change every occurrence.

Nowadays, Sass (SCSS) has become a de facto standard for developing CSS, especially in web applications. And if you use semantic CSS sizes, which I would recommend anyway (I already gave you this blog post), this can be solved.

How does this translate into code? Instead of using padding: 16px, define an SCSS variable $medium: 16px and use that: padding: $medium. Every time you change the value of $medium, everything is updated. Keep in mind that too many sizes, or sizes with names that are not semantically chosen, will just clutter your code and make maintenance harder over time.
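A minimal sketch of what I mean (the variable names are my own choice – pick a semantic scale that suits your project):

```scss
// Semantic spacing sizes – names are illustrative
$small: 8px;
$medium: 16px;
$large: 32px;

p {
  padding: $medium; // change $medium once and every usage follows
}
```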

Remember that using Sass does not solve any of the issues above.

Media queries – are they special?

What should you use for CSS media queries? To help you decide, let me translate the question into plain English. Say you want to choose between two layouts depending on screen size. Would you rather say "I want option A if the screen can fit 80 characters and option B for more", or "I want option A if the screen can fit 600 optical illusions of a pixel and option B for more"? As you guessed, the first option corresponds to relative units (em or rem) and the second to absolute ones (px).
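As a sketch, here is the same breakpoint expressed both ways (the 40em/640px values and the .sidebar selector are illustrative):

```css
/* Relative breakpoint: roughly "when about 80 characters fit";
   it scales with the user's default font size */
@media (min-width: 40em) {
  .sidebar { display: block; }
}

/* Absolute breakpoint: fixed at 640 CSS pixels,
   regardless of the user's font settings */
@media (min-width: 640px) {
  .sidebar { display: block; }
}
```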

Relative, but root or contextual?

Now, if you decided for relative units, what should you use em or rem?

Well, again, it's your decision. But I will try to simplify it for you with a few examples. Let's say you have a reference to a footnote, implemented with the sup tag. How do you want it rendered? Always at the same size, or relative to the neighboring text? Suppose the heading has double the font size of the root one (h1 { font-size: 2rem; }). Do you want the footnote reference doubled too, or just the same size as in a paragraph? If you go for the former, use em (sup { font-size: 0.6em; }); for the latter, use rem (sup { font-size: 0.6rem; }).

Now take the heading from the example above. If you place it in the footer, do you want it the same size as the one in the content, or smaller, according to the footer font size? If you want it smaller, then drop the rem from the example above and use em instead.

Let's consider a more complex example: some extra-information paragraph implemented with the aside tag. Do you want it to render the same all over your page, or to scale – say, scale down when you include it in the footer? The answer is simple – rem for the first option, em for the second.
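A sketch of the two options (the selectors and sizes are my own illustration):

```css
footer { font-size: 0.8rem; }

/* Option 1: the aside renders the same everywhere,
   anchored to the root font size */
aside { font-size: 0.9rem; }

/* Option 2: the aside scales with its context – inside the
   footer this computes to 0.9 × 0.8 of the root font size */
aside { font-size: 0.9em; }
```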

It’s also an option to combine these two.

Fortunately, even though it is a more recent unit, rem has support in all modern browsers. So this is not an issue.


There's no perfect measurement unit for your pages and you'll see lots of examples of both. There are many advocates of the relative ones, simply because responsive web design and support for different devices and browsers have become a priority nowadays. I believe it's essential to understand the differences, pros and cons between them and then make a decision. Choosing a combination of them is also a viable option.

Categories: Web

A new project with TypeScript and Angular

July 2, 2018 Leave a comment

More than a year ago, I started a new adventure in a new startup company. New company, new adventure and a new project. New technology maybe?
Of course the risk of adopting a new technology in a new project is lower than migrating to a new technology in an existing one, but there still is a risk. Especially if the technology is young and almost no one in the team has experience with it.

I've been working on web projects for almost twenty years, and with JavaScript for all that period. It is said that JavaScript is the least understood language. And even if you understand it, you need a very high level of discipline in designing your application and writing your code if you want to keep away from spaghetti code. One of the biggest issues with JavaScript, in my view, is that it's not a strongly typed language. In the past I even tried to bring classes into my JavaScript code. But that solves the problem only partially.

You can understand my enthusiasm when I saw TypeScript. A strongly typed language for the web. Yoohoo! And an entire framework built on top – Angular. Angular, not AngularJS. I worked with both frameworks, and basically all they have in common is the name. Angular is also known as the next version of AngularJS, or Angular 2, 4, 5, 6 …

Now, coming back to the project: I proposed the new TypeScript/Angular as its development language/framework. At that moment it seemed like a big risk: no one in the team had used it before, and even I had used it in only a couple of projects, none of which made it into production. But now, in retrospect, I believe it was one of the best decisions when it comes to technology selection for a new project.

I won't dwell too much on TypeScript and Angular, but I would still like to point out a few advantages that I really like, to make my case.


TypeScript is a strongly typed language for the web with a lot of similarities to JavaScript. It's not an interpreted language, but a hybrid one that compiles to JavaScript. This way you'll catch a lot of errors right in the development phase; even better, they'll be flagged by your favorite IDE/editor.
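A minimal sketch of the kind of mistake the compiler catches (the `User` interface and `fullName` function are my own illustration, not from any real project):

```typescript
// A hypothetical typed model – the compiler enforces the shape
interface User {
  firstName: string;
  lastName: string;
}

function fullName(user: User): string {
  return `${user.firstName} ${user.lastName}`;
}

const ada: User = { firstName: "Ada", lastName: "Lovelace" };
console.log(fullName(ada)); // "Ada Lovelace"

// Both of these are rejected at compile time, before anything runs:
// fullName({ firstName: "Ada" });  // error: property 'lastName' is missing
// fullName("Ada Lovelace");        // error: a string is not a User
```

In plain JavaScript both commented-out calls would happily run and fail only at runtime, possibly deep in production.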

Looking into the future, I think new projects and libraries should be written in TypeScript, even the ones targeting JavaScript. TypeScript is interoperable with JavaScript: the code compiles to JavaScript, and the library is also augmented with type information for TypeScript users. The compiled script is optimized, obfuscated and easy to integrate. JavaScript acts as a kind of assembly code in this case.

A lot of TypeScript's improvements have come to JavaScript through the latest ECMAScript standards, but not all are widely supported. There are also initiatives for native TypeScript support directly in the browser. But I still see quite a few of the advantages outlined above standing in a hybrid approach (compiled + interpreted).

In conclusion, I believe TypeScript is the modern and best choice when it comes to programming languages for the web. It's so cool that sometimes I cannot believe it was made by Microsoft. Of course, it was a joint effort, and maybe this approach will make them think about their future in a more and more open community.


Angular is the perfect companion as a framework for TypeScript. Its componentized approach could seem overkill in the beginning, but in an enterprise project you'll quickly see its value. Components can be easily isolated and reused. It's so easy to develop such a component that sometimes it can be easier to develop your own from scratch than to customize an existing third-party one. Of course, this should be the exception rather than the rule :).

As I said earlier, AngularJS and Angular have basically only the name in common. Because of that, it's pretty hard to upgrade from the former to the latter. Upgrading between versions of Angular is quite the opposite, as they maintain a high level of backward compatibility and features are deprecated progressively. It usually took me just a few hours to upgrade from Angular 2 to 4, from 4 to 5, from 5 to 6. Because TypeScript is strongly typed, the compiler, or even better the IDE, points out the errors, making it extremely easy and straightforward.

Of course, a homogeneous product is the ideal case, but those are so rare … We had to integrate our project with an existing one built on AngularJS. It was like a case study – how to upgrade and interoperate between the two Angulars. Angular came with a nice rescue solution here and, with a decent effort, we came up with a clean way of doing it. I won't go into details, but the nicest part, which definitely gained my vote, was that you could actually upgrade module by module, or even component by component. And the effort finally paid off when we started to reuse parts of the new project in the old one.

If you want to start a new project, Angular is a very well equipped framework that comes out of the box with a TypeScript linter and compiler, webpack, SCSS support, unit and automation testing, polyfills etc. AngularJS did not have an official scaffolding tool, but Angular has the Angular CLI, which does a nice job.

TypeScript and Angular offered us a development landscape with an emphasis on ease of development, fewer errors and lots of reuse opportunities. I think it was the best foundation on top of which we could build a modular toolkit, based on atomic design principles. We also managed to create a continuous build system, where the code was lint-checked and compiled for different environments, catching a lot of issues right from that phase – a much harder, or even impossible, endeavor with plain JavaScript and other frameworks. We also integrated unit and automation tests and are working on extending their coverage. This gives us the confidence to build new features faster and shorten release cycles.

So, every time you start a new project, especially if you're unhappy with your development ecosystem, try investigating new ones – technologies are evolving at much higher speeds nowadays. For the past decade or so, the biggest issue in software and web development has been maintainability, even before performance. And more importantly, don't be afraid of change – embrace it.

Categories: Software, Web

Atomic design

January 23, 2018 3 comments

I recently read Atomic Design by Brad Frost. It was like a breath of fresh air – look! someone else is thinking the same, phew, I'm not alone. And not only that, someone else took the time to write a book and formalize everything. I sincerely believe the book should be mandatory reading (it's easy and takes only a few hours) for anyone involved in web projects – UX designers, visual designers, copywriters, front-end developers, back-end developers, testers, project managers, CTOs … anyone! Are you in the web business? Then read it.

Why did I like it so much? Not only because it lays down some very good design principles and offers a common language for them, but mainly because it preaches a mindset change.

And now I come to the point where I want to tell you about my experience in this field. About six years ago, around the same time Brad started to apply the principles in his book, I was working for a major IT company. Mobile web was on the rise, but the company had no presence there. The desktop website rendered on mobile as a zoomed-out version, making it unreadable and, of course, unusable. So I pitched the idea of building a mobile web presence to my manager, and I was in luck: I was tasked with creating a proof-of-concept, and the idea caught on. One other manager came on board, and this way a new team was born. The proof-of-concept went live so that we could actually get some usage metrics.

A few months later came another lucky development. Periodically the company went through a brand redesign, and this included the web presence too. Frankly speaking, my case was not exactly like the ones in Brad's book. The company had structure and clear brand guidelines, including for the web. They even had a governance team. But the problems were of another nature. First, to give you an idea of the magnitude: the company website consisted of hundreds of thousands of web pages and tens or hundreds of web applications. A lot of teams, even external agencies, were working on these.

These raised some very interesting challenges:

  • The redesign was tedious, long and expensive. Many people worked on the updates for many months. Often entire sections or applications did not benefit from the facelift, so the same website had different designs living side by side.
  • Because the web guidelines were quite extensive, the learning curve for creating web pages and applications was very steep. Hence high costs.
  • The governance team was overwhelmed, busy most of the time checking whether websites met the standards they had put in place. Due to this check, going live with a website was delayed, sometimes inexplicably from the developers' and stakeholders' point of view.
  • The special needs of some development teams were almost never addressed, which ended up in frustration and either going rogue or doing ok-ish work with the sole purpose of just delivering something.

Then I said to one of the UX designers:
– Cool! We now have the chance to do things the right way. We'll build not only a mobile presence, but a responsive web presence for all devices, and we'll build it like a toolkit that can be used by everyone.
– Nah! Building such a toolkit will be very tedious. Not technically, but politically, to get the buy-in of all the high levels involved.
– Then let’s do it at a smaller scale to show them the advantages.
– Yup, here you may have something, let’s do this for mobile.

Then I connected directly with the UX team and asked them to create a list of semantic components. I explained to them the concept of semantic design (CSS), its advantages, and the fact that it doesn't add any extra work – it's just a paradigm shift. They were very open (maybe I infected them with my enthusiasm, maybe they wanted to try something new) and they agreed. After just a few iterations we had an entire box full of semantic components. At the time I wasn't aware of atoms – molecules – organisms, even though we organized them incrementally too. But I think that naming convention is much clearer.

I just want to open a small parenthesis here – the biggest challenge was finding good names, and most of the iterations were related to it. We even applied this when naming colors. I believe that if you cannot find a good, SEMANTIC name that stands the test of style, it will not stand the test of time. The test of style means that the name still makes sense even if you completely change the component's style.

Initially, the components were presented in desktop style, but this was no issue: creating a mobile style was fairly easy and fast. Only then did we jump to development – and it was a breeze this time. We even used the same names in CSS, and the fact that we could reuse styles and components made all that planning effort and mindset change worthwhile.

We ended up with a toolkit in just a few weeks. And this was the output of a team of just 4-5 developers, not fully dedicated to this project. Other front-end developers and external agencies were now able to develop mobile pages. But how was this better or faster? First of all, they got rid of all the web guidelines – a huge book to read and, if you wanted to be proficient, memorize. Now they had a few templates, a handful of components and just a few pages of documentation. If they stuck to those templates and components, their pages were compliant. And that is how you get quite fast approval from the standards team. This toolkit also had all the quirks of mobile development embedded (back then there were many more than nowadays 🙂 ). They were able to test their mobile pages in desktop browsers with the confidence that they would work on mobile devices too. We, as the development team, took care of all the inconsistencies between different devices and browsers. This also gave them the opportunity to focus on the task at hand – developing mini websites fast – and not care about a plethora of devices and their associated quirks.

We also had toolkit guidelines, but ours were much simpler: do not introduce any new CSS classes or custom tags – just stick to our templates and components. Would you think it's too restrictive? Not a chance. We also advertised to all our users that we would ourselves create any new components they might need, if the current ones were not sufficient. And we got a lot of requests. But most of the time those requests were practically a misunderstanding of the naming convention – they were looking for a synonym of an existing component. We ended up adding synonyms to the documentation. And to make it official, we always responded with a link to that documentation. Sometimes creating a new component wasn't actually necessary, just tweaking and customizing an existing one. This way the UI toolkit became more powerful, and the documentation more comprehensive, with each request coming from our users. And the number of requests kept decreasing, freeing up our time to extend and improve the toolkit.

For static pages we had templates and components developed in … – actually, the name of the tool is not important; what matters is that users were not starting from a blank page. And the learning curve was not steep anymore. We even had JSP templates and custom tags available for web application development. We also created a transcoder from desktop pages to mobile-optimized pages using the same toolkit.

And now I will tell you about two cases that stuck in my mind, demonstrating the power of this approach.

Less than a year later, by the time our toolkit had become the one-stop shop for mobile page development, the time for a new company-wide redesign came. Most of the components stood the test of time and just needed a facelift. Probably the most eye-catching change was the move from a black background to a white one. I'll tell you later why I revealed this detail.

After the UX and visual designers handed over the new specs, for us it was mostly a matter of writing a new CSS, which we did in less than 3 weeks. When we were ready to go live, the desktop team was simply amazed:
– What? We haven’t finished yet the homepage. But I guess you did it only for a few pages, so it’s not actually ready to go live yet.
– No, we did for ALL the pages.
– Including the transcoded pages?
– Yup!
– For all the countries and languages?
– Of course.
– But you can't go live, we're not ready.
Then the frustration was on our side, and of another kind. But they agreed to a compromise – to change the style on the homepage only and leave all the subsequent pages on the old style. Most probably hoping that this would buy them at least another 2-3 weeks. The next day we came up with this version – in the end it was just a matter of conditionally including one CSS or the other.
– But we also want to do some A/B testing and release the new design gradually.
– Ok, no problem, we already have support for this. We just need to know the target users for A and B.
Finally they let us go live with the new version in full. A few weeks later they managed to release the homepage; many months later, approximately 80% of the pages had the new design. The A/B testing never happened.

The day after we went live with the new design, one of the front-end developers came to my desk and told me:
– I was developing a mobile page yesterday. In the evening I saved it and shut down my computer. This morning I came in, opened it, and it had turned from black to white. But I swear I didn't do anything.
I started laughing and assured him that it was fine – this was the new redesign communicated by the standards team just a few weeks back. I have to admit I was happy about this level of upgrade. We also sent out a communication stating that front-end developers didn't have to do anything to get the new version; they just had to keep using the same tools, guidelines and toolkit. The replies were almost instantaneous: "We want this for desktop too!"

This is how we got the buy-in for creating a new RESPONSIVE toolkit. But that’s another story.

Update Jan 24th, 2018 (thanks to Meghan)

There are just a few, but very valuable, things that I took from the Atomic Design book. First of all, the book formalizes the entire process and gives pretty good names to everything.

I was using the term semantic design, but atomic suggests the idea of modularity. On the other hand, semantic clearly states the separation between content and style. We were using the term components, and even though we described them incrementally, I think atoms-molecules-organisms makes a much clearer separation and gives a better idea of magnitude.

Another nice idea is the clear separation between UX comps and visual design. If you do this, it will be an additional check that your design system is both modular and semantic.

Categories: Web

PhoneGap setup

March 22, 2016 1 comment

It's not the first time I've played with PhoneGap, but I haven't done so in quite some time. I've always liked the idea of creating a platform-independent application. And if that application can be tested directly in the web browser, even better.

Creating a user interface in a descriptive language like HTML is easier than a programmatic approach where you have to write code to create your visual components. Nowadays most frameworks also offer the descriptive approach, usually through XML, but learning a new language when you already know a more powerful one is not that appealing. HTML is also augmented by CSS, which easily offers a high degree of customization, and by JavaScript, which brings the functionality. Together they create a platform-independent framework with a high degree of customization and a clear separation of layers.

So it's clear why I liked the idea of PhoneGap right from the start. Now, let's set it up.

To develop a PhoneGap application you don't need too many things. The easiest way is to install Node.js and then PhoneGap: npm install -g phonegap.

Then you can create a sample application with phonegap create my-app, a command which will create all the necessary files and subfolders under the my-app folder.

Now comes the testing part, and for this you need to install PhoneGap Desktop. As I said, it's nice that you can test your app directly in your browser by visiting the link displayed at the bottom of the PhoneGap Desktop window (hint: it doesn't work with localhost). And if you install the PhoneGap Developer App you can easily test on your mobile too, without the hassle of reinstalling the application every time you make a change – changes are automatically deployed (reloaded).

When you're done, the fun part comes – actually building the application. Let's do this for Android.

First you need to install JDK (I tested with version 8) and Android Studio.

And then you need to set up some environment variables:

  • JAVA_HOME – this must be set to the folder where your JDK (not JRE) is installed.
  • ANDROID_HOME – this must be set to the folder where your Android environment is installed.
  • add the following to PATH: %ANDROID_HOME%\tools;%ANDROID_HOME%\platform-tools;%JAVA_HOME%\bin on Windows, or ${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools:${JAVA_HOME}/bin on Linux
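On Linux/macOS this could look like the sketch below – the install paths are purely illustrative, adjust them to where your JDK and Android SDK actually live:

```shell
# Illustrative paths – replace with your actual install locations
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export ANDROID_HOME="$HOME/Android/Sdk"
export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$JAVA_HOME/bin:$PATH"

# Quick sanity check that PATH was extended
echo "$PATH" | grep -q "platform-tools" && echo "PATH ok"
```

Put these lines in your shell profile (e.g. ~/.bashrc) so they survive new terminal sessions.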

If the above are not correctly set, or the PATH is invalid (e.g. it has an extra quote (") or semicolon (;)), you can run into errors like:

  • Error: Failed to run "java -version", make sure that you have a JDK installed. You can get it from: Your JAVA_HOME is invalid: /usr/lib64/jvm/java-1.8.0-openjdk-1.8.0
  • Error: Android SDK not found. Make sure that it is installed. If it is not at the default location, set the ANDROID_HOME environment variable.

I also had to run

phonegap platforms remove android
phonegap platforms add android@4.1.1

By default I had installed Android 5.1.1, but I was getting the error Error: Android SDK not found. Make sure that it is installed. If it is not at the default location, set the ANDROID_HOME environment variable. You can check what platforms you have installed by running the command phonegap platforms list.

Make sure you have all the Android tools and SDKs installed by running android on the command line, then selecting and installing all the ones not yet installed.

Finally, you can build the application by running the following command in your project folder:

phonegap build android

and if everything goes well you’ll find your apk at <your-project-dir>/platforms/android/build/outputs/apk.

Categories: Software, Web

Banking apps

October 8, 2015 1 comment

I was thinking a few days ago about what I want from a banking application. So I decided to write an article from the user's point of view, about their expectations when it comes to banking applications. So this one is for everyone, not only technical people :).

We have to admit that banks are huge dinosaurs, especially when it comes to their websites and the web applications offered to users. And it shouldn't be the case. They're making huge piles of money out of thin air, and I'm trusting them with my money; the least they could do is give me some good tools.

First, there are some features that every banking app should incorporate. You should be able to access any of your accounts – current, savings or loans – have an AGGREGATED status of all of them, create new ones in any currency or delete existing ones, and easily transfer between them and any external accounts. And if I want to transfer money, I don't care if it's internal, national or international; I'll just give the recipient, the IBAN (or any account identification string) and the amount, and that's it. Show me the transaction fee (and the exchange rate, if that's the case) and, if I accept it, just do it. The same goes for transfers scheduled in the future or recurrent (weekly, monthly, yearly) ones. If the transaction fee for any of these changes in the future, deactivate them and just let me know, so I can reactivate them.

Direct debits are a must. I like a bank where I have to spend less of my time in the bank, offline or online. So I should be able to set up direct debits as easily as any other transfer – specify a recipient, an IBAN and a limited amount per week, month or year. Also, any company should be able to easily request debits from my account through some kind of API. Then I could use this not only for my gas and electricity bills, but also for my internet provider or gym subscription. And I could cancel them at any time, or just set an expiration date.

Another must is being able to associate my card with any of my accounts. Imagine that I go abroad and would like to spend money from a foreign-currency account. Being able to switch, and switch back, instantly shouldn't be impossible or a hassle – traveling is not uncommon.

A mobile app with which I can pay without any credit/debit card is also something that should be ordinary, not cutting-edge tech.

Categories: Web

Shift an array in O(n) in place

April 7, 2015 Leave a comment

Below I copied the code for shifting an array in place.

void shift(Object[] array, int startIndexInclusive, int endIndexExclusive, int offset) {
    if (array == null || startIndexInclusive >= array.length - 1 || endIndexExclusive <= 0) {
        return;
    }
    if (startIndexInclusive < 0) {
        startIndexInclusive = 0;
    }
    if (endIndexExclusive >= array.length) {
        endIndexExclusive = array.length;
    }
    int n = endIndexExclusive - startIndexInclusive;
    if (n <= 1) {
        return;
    }
    offset %= n;
    if (offset < 0) {
        offset += n;
    }
    while (n > 1 && offset > 0) {
        int n_offset = n - offset;
        if (offset > n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n - n_offset, n_offset);
            n = offset;
            offset -= n_offset;
        } else if (offset < n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            startIndexInclusive += offset;
            n = n_offset;
        } else {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            break;
        }
    }
}

The swap(array, index1, index2, len) method swaps in the given array the elements from [index1, index1 + len) with the ones [index2, index2 + len).
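For reference, here is a minimal sketch of such a swap helper; the class name and the demo values are mine, just for illustration, and it assumes the two regions do not overlap:

```java
public class SwapDemo {
    // Exchanges the len elements starting at index1 with the len elements
    // starting at index2. Assumes the two regions do not overlap.
    static void swap(Object[] array, int index1, int index2, int len) {
        for (int i = 0; i < len; i++, index1++, index2++) {
            Object tmp = array[index1];
            array[index1] = array[index2];
            array[index2] = tmp;
        }
    }

    public static void main(String[] args) {
        Object[] a = {1, 2, 3, 4, 5, 6};
        swap(a, 0, 3, 2); // swaps {1, 2} with {4, 5}
        System.out.println(java.util.Arrays.toString(a)); // [4, 5, 3, 1, 2, 6]
    }
}
```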

Even though it may seem complicated at first, the idea is pretty simple. If the offset is half of the array length, or in other words if offset == (n - offset), where n is the total number of elements to be shifted, then the shift is equivalent to swapping the two halves of the array.
For the first two cases we swap the portions at the ends; one of them ends up in its final place and we continue iterating over the rest, as shown in the figure below.

shift algorithm

Space complexity is clearly O(1), but what about time complexity? I'm going to prove that it is O(n).

Let Sh(n, k) be the problem of shifting k positions in an array of size n and Sw(k) be the problem of swapping k elements in an array. For the sake of simplicity I left out the start and end indices.

It is obvious that O(Sw(k)) = O(k).

It is also obvious that O(Sh(1, k)) = O(1), with k < 1. Also O(Sh(x, 0)) = O(1).

Now let's assume that O(Sh(n, k)) = O(n) for any n, with k < n. I'll try to prove that O(Sh(n + 1, k')) = O(n), with k' < n + 1.

Analyzing the algorithm we have

O(Sh(n + 1, k')) =
    O(Sw(n + 1 - k')) + O(Sh(k', 2k' - n - 1)), if 2k' > n + 1
    O(Sw(k')) + O(Sh(n + 1 - k', k')), if 2k' < n + 1
    O(Sw(k')), if 2k' == n + 1

  • First case 2k’ > n + 1

    O(Sh(n + 1, k’)) = O(Sw(n + 1 – k’)) + O(Sh(k’, 2k’ – n – 1)) = O(n) + O(Sh(k’, 2k’ – n – 1))

    Because (k' < n + 1) ⇒ (k' ≤ n) and (2k' ≤ 2n) ⇒ (2k' - n - 1 ≤ n - 1) ⇒ (2k' - n - 1 < n), then O(Sh(n + 1, k')) = O(n) + O(n) = O(n).

  • Second case 2k’ < n + 1

    O(Sh(n + 1, k')) = O(Sw(k')) + O(Sh(n + 1 – k', k')) = O(k') + O(Sh(n + 1 – k', k')) = O(n) + O(Sh(n + 1 – k', k'))

    Because (0 < k') ⇒ (n + 1 – k' < n + 1) ⇒ (n + 1 – k' ≤ n) and (k' < (n + 1)/2) ⇒ (k' ≤ n/2) ⇒ (k' < n), then O(Sh(n + 1, k')) = O(n) + O(n) = O(n).

  • Third case 2k’ == n + 1

    O(Sh(n + 1, k’)) = O(Sw(k’)) = O(n)

So O(Sh(n + 1, k')) = O(n). As a consequence, O(Sh(n, k)) = O(n) for any n, with k < n.
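To sanity-check the linear bound empirically, below is a self-contained sketch of the algorithm instrumented with a move counter. The class and variable names are mine, and the index/offset normalization guards are left out for brevity:

```java
import java.util.Arrays;

public class ShiftMoveCount {
    static int moves = 0; // total element writes, to check the O(n) bound

    static void swap(Object[] array, int index1, int index2, int len) {
        for (int i = 0; i < len; i++, index1++, index2++) {
            Object tmp = array[index1];
            array[index1] = array[index2];
            array[index2] = tmp;
            moves += 2;
        }
    }

    // Rotates array[start, end) to the right by offset, in place.
    static void shift(Object[] array, int start, int end, int offset) {
        int n = end - start;
        while (n > 1 && offset > 0) {
            int nOffset = n - offset;
            if (offset > nOffset) {
                swap(array, start, start + n - nOffset, nOffset);
                n = offset;
                offset -= nOffset;
            } else if (offset < nOffset) {
                swap(array, start, start + nOffset, offset);
                start += offset;
                n = nOffset;
            } else {
                swap(array, start, start + nOffset, offset);
                break;
            }
        }
    }

    public static void main(String[] args) {
        Object[] a = {0, 1, 2, 3, 4, 5, 6};
        shift(a, 0, a.length, 3);
        // Each element is written at most twice, so moves stays <= 2 * n.
        System.out.println(Arrays.toString(a) + " moves=" + moves);
        // prints [4, 5, 6, 0, 1, 2, 3] moves=12
    }
}
```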

The code will become part of commons-lang, ArrayUtils class, as of version 3.5.

Categories: Web

WebRTC saga

February 5, 2015 2 comments

Recently I started using WebRTC. Cool technology. You can build a web chat in just tens of lines of code, you can take pictures with your webcam and using canvas you can manipulate them. And these are just a few examples of what WebRTC can do for you and your web app.

Tens of lines of code indeed. But not easy lines at all. Even though WebRTC will enable peer-to-peer communication, you still need some server components. These server components are involved in the session initiation.

First there is an ICE/STUN/TURN server that is used by a client to discover its public IP address if it is located behind a NAT. Depending on your requirements, it may not be necessary to build/deploy your own server; you can use an already existing public (and free) one – here's a list. You can also deploy an open source one like Stuntman.

Then comes the signaling part, used by two clients to negotiate and start a WebRTC session. There is no standard here and you have a few options.

You can use an XMPP server with the Jingle extension. Developing your own XMPP server (or component) just for this would be overkill, so you should definitely consider using an existing one. But even installing, configuring and integrating an existing one could be too much if you don't use it for anything else. If you already have one in your infrastructure, though, you could hook into it. On the client side, you can use an existing XMPP JavaScript library.

You can also use SIP, a protocol much more commonly encountered in VoIP. Like XMPP, SIP is too much to be used just for WebRTC signaling, but if you already have it in your infrastructure, it could be easy to use. For SIP in Java you have two development options.

The high level one is SIP servlets, with its most notable implementation, Mobicents. If you develop a non-GPL application, the Mobicents commercial license could be quite expensive just for WebRTC signaling. Mobicents is developed on top of Tomcat or JBoss, so, depending on your environment, this could also be a drawback.

The lower level option for SIP is JAIN-SIP, on which even Mobicents is built. It is completely free. Here you have different transport options, like TCP, UDP or websockets; for WebRTC the latter is the most appropriate.

With both options, you’ll still have to develop the server logic and SIP is a pretty cumbersome protocol so prepare yourself for a few headaches.

If you decide for SIP, on the client side, things could be a little bit brighter. There are already JavaScript SIP signaling solutions that you can easily integrate into your web applications. I looked into JsSIP and SIPJS and I ended up using the latter. SIPJS is actually forked from JsSIP, but it encapsulates the intricacies of the protocol better, which makes it a little bit easier to integrate.

Another option is to develop your own signaling protocol using something like websockets. Then you have to develop both the client side and the server side from scratch. Developing the server side, in my opinion, will be easier than the previous options. You practically have to develop a messaging system that forwards messages between users without caring too much about the content. @ServerEndpoint abstracts development in an elegant manner and you can even reuse your existing authentication system, as you can bind a websocket session to the HTTP one.
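As an illustration of how little the server has to do, here is a transport-agnostic sketch of that forwarding logic. The class name SignalingRelay and the callback shape are my own; in a real deployment each callback would wrap a websocket session's send method inside an @ServerEndpoint:

```java
import java.util.List;
import java.util.Map;
import java.util.ArrayList;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class SignalingRelay {
    // user id -> outbound channel (would be a websocket session in practice)
    private final Map<String, Consumer<String>> sessions = new ConcurrentHashMap<>();

    public void register(String user, Consumer<String> outbound) {
        sessions.put(user, outbound);
    }

    public void unregister(String user) {
        sessions.remove(user);
    }

    // Forwards an opaque payload (SDP offer/answer, ICE candidate, ...) to `to`.
    // The relay never inspects the content, it only routes it.
    public boolean forward(String to, String payload) {
        Consumer<String> outbound = sessions.get(to);
        if (outbound == null) {
            return false; // peer not connected
        }
        outbound.accept(payload);
        return true;
    }

    public static void main(String[] args) {
        SignalingRelay relay = new SignalingRelay();
        List<String> inboxB = new ArrayList<>();
        relay.register("B", inboxB::add);
        relay.forward("B", "{\"type\":\"offer\"}");
        System.out.println(inboxB);
    }
}
```

All the protocol intelligence stays in the clients; the server only needs registration, lookup and delivery.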

If you’re worried about websockets browser support, don’t. Wherever WebRTC is supported, Web Sockets are too.

On the client side, things will be a little more complicated, as you will have to implement the entire message flow; the server just forwards your messages to the appropriate peer. This entire flow should be asynchronous, something like a request-response paradigm.

The development difficulty arises here mainly from the different behavior on various browsers. And by various, I mean Firefox and Chrome, the main ones supporting WebRTC.

When it comes to WebRTC you don't have to code anything for the actual streaming, only for initiating the session. Theoretically, the WebRTC session initiation process is as follows. Let's suppose A wants to talk to B. And for the sake of simplicity we will use A signals B with the meaning that A sends a message to B through the signaling channel.
1. A will create and initialize an RTCPeerConnection. A few things are involved here: a list of addresses of the ICE/TURN/STUN servers and a local stream to share with B.
The list of ICE servers is passed in the constructor; the local stream is obtained through navigator.getUserMedia() and added afterwards.
2. Using that connection, A creates an SDP (session description protocol) offer.
3. A signals B the offer.
4. When B receives the SDP offer from A (or even before), it will create an RTCPeerConnection like in step 1.
5. B will set the remote session description like RTCPeerConnection.setRemoteDescription(new RTCSessionDescription(SDPOfferFromA)), where SDPOfferFromA is the JSON object received through the signaling channel.
6. B will create an SDP answer.
7. B signals A the answer.
8. When A receives the answer, it sets the remote description like in step 5, using SDPAnswerFromB.
9. Asynchronously, when a new ICE candidate is discovered and presented through the onicecandidate event of RTCPeerConnection, A (or B) should signal it to B (or A). When the other party receives it, B (or A) will add it using RTCPeerConnection.addIceCandidate(new RTCIceCandidate(candidateDescriptionReceived)).

Again, theoretically. In practice …

First of all, because WebRTC is not final, but in draft state, all the types are prefixed, like mozRTCPeerConnection (in Firefox) or webkitRTCPeerConnection (in Chrome and Opera). But this can be easily fixed.

window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;


Then comes the ICE candidates part. In Firefox the ICE candidates must be gathered before creating an SDP offer or answer. So before creating an offer/answer, check RTCPeerConnection.iceGatheringState and proceed only if it is "complete".

There are also a few inconsistencies when it comes to the media constraints passed to navigator.getUserMedia().

In the end, I can conclude that WebRTC is really cool and you can build media applications in a matter of tens or hundreds of lines of code. I guess it could have been designed to be easier to use, though. Even though there are a lot of opinions stating that the signaling protocol is better left out of the standard, I still think it would be better if an optional one were included. Or at least a minimal, easy-to-implement API.

Categories: Web Tags: ,

Choosing an XMPP server

January 28, 2015 1 comment

I was working lately with WebRTC. One of the biggest issues there is the signaling part.

As an option you can choose XMPP with its Jingle extension. So, naturally, I was looking into a few XMPP servers. What requirements was I chasing?

My architecture is Java based, so I was looking into solutions built on this technology. I know that I can integrate different technologies through things like LDAP or web services, but if I needed something custom, it would be much easier to develop it in Java. As a nice to have, the final solution should be extensible through some kind of plugin system. Another requirement was to be open source or at least affordable.

Taking these into account I narrowed down the list to OpenFire, Tigase, Apache Vysper and Jerry Messenger.

Apache Vysper aims to be a modular, full featured XMPP (Jabber) server. Unfortunately, it does not offer an easy out-of-the-box installation and integration procedure. It also seems to be in a beta stage, not production ready.

Jerry Messenger has an embedded Jetty server, is easily configurable and features a pluggable system.

OpenFire seemed the most complete solution in its field. It has a very rich web admin interface, a plugin system and extensive documentation. As a bonus, it is just a web application so you can install it in your favorite web application server along with your other web applications.

Tigase claims to be the most scalable XMPP server, supporting hundreds of thousands of concurrent users. It is configurable, standalone and pluggable. Unfortunately, the documentation is not as extensive.

Clearly, my preference went towards OpenFire and Tigase. But I ended up not using XMPP at all for WebRTC signaling. Why? All about it in the next article.

Categories: Web

Building enterprise web sites

August 26, 2013 Leave a comment

Responsive design is one of the latest and hottest topics in web design and development. Creating a responsive site – a site that offers a decent to good user experience and follows accessibility guidelines – is not easy, but it's not that hard after all. It could take a few iterations, but you'll get there. But doing it at an enterprise level, that's a whole different game.
First of all, what does enterprise bring to the table that makes things harder? I'll clarify, just so we know where we stand and can clearly define the requirements.

To start, enterprise usually means big. So we have volume: in terms of pages, visitors, infrastructure, human resources. Let's see how each of these influences development.

Lots of pages. This is probably one of the most important aspects. If you develop each page individually, on its own, then you'll waste money and time. And by the time you develop the last page, the first one becomes obsolete and needs to be redone. Of course, I assumed that you want the same user experience (look and feel) on your entire website, which is the smart thing to do anyway, and I won't go into details why. So, coming back, there is clearly a need to reuse code in order to reduce development and maintenance effort and easily update all pages at once.

Lots of visitors. And this brings with it the need for performance. At every level: network and server infrastructure, code and process. Infrastructure is the foundation of your site and it’s also the main reason for stability (or instability) or, in other words, uptime (or downtime). If infrastructure is alright, then performance comes down to code. And on the web, this translates into server side execution time, number of requests, response size and client side execution time.

Big infrastructure. When talking about huge infrastructure, we will definitely see a heterogeneous environment: different server-side technologies of different ages. So we will have outdated environments and outdated code, and we will need to deploy to multiple environments simultaneously.

Lots of human resources. As many people will be involved in the development, there should be a clear process and extensive documentation. A clear process will reduce mistakes and inconsistency, while increasing development efficiency. Extensive documentation will flatten the learning curve and decrease the need for direct support from developers. Also, you should take into account that people with different backgrounds and skills will work together. If we make development easier, we will be able to reduce costs, either by reducing the need for high level skills or by reducing development time.

Now that we've seen what an enterprise site means and what the basic requirements are, I will go through a list of good practices. The list of requirements could actually be bigger than the one above, depending on your business needs. Also, the following best practices focus only on client side development.

Architecture: Use a modular architecture

Modularize as much as possible. Besides a core framework, everything should be described and implemented as a component. Components will be assembled together to create pages, microsites or web applications. This leads to many advantages. Different teams can implement different components. The user experience can be changed in its entirety or partially, but the most important aspect is that it can be done globally: updating a component will update all the pages using it.
Besides the component developer, we identify the role of component user – the person who will assemble a page, microsite or web application from the existing components.

Architecture: Use a governance process

Even though the component development process can be easily decentralized, ensuring consistency and avoiding development overlaps should be done centrally, through some kind of governance process. Representatives from all teams should be involved. By consistency, I mean using the same technologies and architectural patterns for the same purpose. Overlaps can mean implementing the same component twice or components very similar in nature. E.g. if you implement an autosuggestion input component for cities while you already have a generic autosuggestion component, then you're overlapping. It's better to find a way to integrate a city suggestion service into the generic component.
Before starting to implement a component, check whether you can use an existing one, even if it means some degree of customization.

Architecture: Reuse

As I said, one of the main reasons for governance is to reuse as much as possible, whether we're talking about your own code or third party libraries. Do not reinvent the wheel; it will always be round and, unfortunately, maybe not from the first tries.

Architecture: Adopt the new

As a rule of thumb, always prefer the new version, whether we're talking about a standard or a JavaScript library. This way your site will stand the test of time better. Of course, the problem is browser support (read: IE support), but there are usually workarounds.
Use the new HTML tags instead of the deprecated ones. STRONG is preferable to B, as it relates to semantics instead of layout.
Use CSS transitions instead of JavaScript animations. CSS transitions give better performance (as they can make use of the GPU), but JavaScript animations can be used as a fallback when the browser doesn't offer support.

Architecture: Create extensive documentation

Document the development process, patterns and choice of technologies. Heavily comment your code. Document all the components and create some kind of easily browsable index. Consistency should be a concern when it comes to documentation too: use the same style and tools throughout all teams.
Extensive documentation will flatten the learning curve, especially for new resources. It will also reduce the developers' involvement in deployment and support for component users.

Architecture: Encapsulate JavaScript/CSS into easy to use components

I mentioned earlier the different roles in your organization. It could be that the users of your components don't have a high degree of knowledge of JavaScript and/or CSS (keep in mind that JavaScript is the most misunderstood language). E.g. YahooUI is a great library, but it requires JavaScript knowledge. The same goes for jQuery components. Instead of this approach, or actually on top of this approach, you should use a very simple principle: convention over code. To take the JavaScript part out of the equation you could use CSS classes to annotate your components, by convention. E.g. instead of $("TABLE.whatever").tablesorter() you can simply use a .sorted CSS class and include $("TABLE.sorted").tablesorter() in your main JavaScript as an onload event. Of course, this will impact the initial load performance, but there are always trade-offs :).

Performance and maintainability: Reduce the number of global resources

Reducing the number of global resources will reduce the number of requests and improve performance. But performance is not the only concern, maintainability is there too. It’s easier to deploy one resource and it’s easier to update one reference. By resources I mean stylesheets, JavaScript files, images, fonts etc.

The number of stylesheets and JavaScript files can easily be reduced to one each. For CSS, preprocessors like Less or Sass will be of great help; for JavaScript, module systems like AMD. These tools help you modularize and organize your development environment while still deploying only one resource into production. They make it easy to integrate even third party resources.

When it comes to images, icons more specifically, icon fonts are the key. They are supported in many browsers, reduce all icon requests to one, and are scalable in size and color. The only disadvantage is that you cannot have multicolor icons. You can combine icons to obtain this, but it could become cumbersome.

Naming: Prefer short, but meaningful names

As for naming, try to use generic but meaningful names for everything, from resources to CSS classes. E.g. for resources prefer "global.css" over "global_v3_blue.css", as you will stick with them for a long time; this way, moving to a new version, you will still have a meaningful name. Also make use of server side processing if you want to personalize the experience for different users, geographies, browsers etc. E.g. if you want two different CSS files, one for mobile devices and one for desktop browsers, then refer to both as "global.css" but do a server side redirect based on device detection.
As for the CSS classes, I already wrote about this subject.

Naming: Use the semantics of HTML tags

Use the semantics of HTML tags instead of defining new CSS classes, where appropriate. It is preferable to use H1 instead of .title and H2/H3 instead of .sectionTitle.

Naming: Use as few CSS classes as possible

This falls into the same reuse category. If you have hundreds of classes, it will be harder to remember which class is for what and when to use one over another. Prefer using contextual references in CSS selectors or combining existing classes rather than creating new ones. E.g. do not create a new class .right-menu; use the two classes .menu.right. Of course, an even better naming would be .menu.contextual.
So reuse comes down even to CSS class names.

These are simple rules, but following them is not always easy. They will help you get an easily maintainable website, so you can react quickly to industry changes.

Categories: Web

Innovation Summit 2013 in Bucharest

May 30, 2013 Leave a comment

The Innovation Summit happened in Bucharest this week. In a few words: some interesting presentations, given in more or less attractive and engaging ways. Most of the things were not new, but it's nice to hear them again as confirmation and, most importantly, presented in a different way. Also, a lot of case studies were presented, which always lends confirmation to the speaker's ideas, and even to yours if they're similar.

Petru Jucovschi, Technical Lead for Windows and Windows Phone – The next level of Digital Innovation. This was a presentation about innovation in user interfaces, with a strong focus on the so-called Metro interface for Windows 8. You can read more about it here or here. Embedding HTML5/CSS3/JS into the OS and as an application platform was presented as an innovative approach. It isn't. Palm had webOS on the market years ago; indeed, HP killed it, but it was there first.

Meldrum Duncan, Innovation Consultant, Founder ?What If! US – Oh, the mistakes I've made: 13 years of running innovation projects. Clearly an engaging speaker and an attractive presentation. He focused on 10 simple principles of driving innovation, among them ones that encourage collaboration outside the office, like "Drink beer!" Yes, yes: engage with colleagues, stakeholders and clients in a friendlier environment, which will open up new thinking.

Windahl Finnigan, Associate Partner, UX Innovation & Strategy Director for Smarter Commerce at IBM – Full Tilt Boogie of Disruption was a too corporate speech for my taste. Even though with some good and interesting examples, the presentation was less engaging and more like a flat line.

Matt Rosa, Innovations Director at BAT – An Iterative Consumer-Centric Approach to Product Innovation discussed tobacco industry innovation with a focus, of course, on BAT. Innovation in this field was close to zero up until 1999, but it has covered multiple topics since then. From a process standpoint he brought up the idea of going outside the organization, to external agencies, for a fresh and different view. Personally, I'm just glad that, with all the innovation, the base of tobacco customers is shrinking each year.

Martin Hablesreiter, designer / artist / filmmaker at honey and bunny studio – Food Design is more important than politics. The innovative capacity of food design. I liked the presentation style, the content, the slides, but for me, food is just food and what counts is its nutritional and health value. Martin discussed, in an ironic way, the many influences on food design: functionality, culture, transportation, economic feasibility etc. But imagine the visual impact of the photos on the slides right before lunch :).

Susan Choi, Director of Innovation at Mandalah – How the heart and pulse of culture can lead companies to culturally relevant and successful innovation? advocated for local research before jumping to conclusions or using any well known stereotypes. As a case study she used a campaign launched in the Middle East, where, surprisingly, they acknowledged an entire youth movement towards the digital space and innovative solutions.

Daniel Jurow from R/GA started his presentation with a bit of his company history and how they expanded.
I clearly remember one of the slides: "93% of executives say their long term success depends on their ability to innovate, but only 18% said that their innovation strategy works." The speaker said this happens because they are not taking bold steps, but smaller, safe ones. I would say that this is the result when you base your innovation and related decisions on spreadsheets.

Instead of the old formula for growth through either horizontal (line extension backed by a serious mass-media campaign) or vertical integration, Daniel debated the concept of functional integration. Functional integration is the new idea of extracting value from your existing ecosystem. In plain English: reach your existing customers and sell them additional products and services – maybe some they don't even need, but they're so cool :). The examples are clear: Apple, Google, and at the summit I heard about BMW and Nike. Yes, Nike with Nike+, which allowed them to gain a huge market share. As a runner, I like minimalist shoes and trail running, and Nike doesn't have options there, so I don't use them. But it seems others really like to share their exercises on Facebook, so …

Of course, the only integration platform for end-user customers is the web.

In the end, the speaker was recommending to not restrict innovation to a single group, but encourage a culture of innovation in the company, an idea reiterated by almost all the speakers after.

Erez Tsalik
In the beginning I said that most of the things presented were not new. That's not the case here, at least not in approach. Erez contradicted almost everything the other speakers said and took a clearly analytical approach to the process of thinking about innovation. But he also reiterated that innovation needs commitment throughout the company, starting with the executive levels. Managers need to know how innovation works and how to manage innovative people, because usually managers do not know how to distinguish between a good innovative idea and a bad one. That's the reason they should not validate innovative projects and ideas.

He discussed some of the classical approaches to the process of innovation (if there is one). Brainstorming seems not to get the best results, because people generate more ideas separately than together. Why? They don't waste time listening to others. But I would think that brainstorming is good as a next step, to validate the ideas and select the feasible ones.

Think out-of-the-box! Or better, NOT! Out-of-the-box means no rules, but in business, constraints exist. And constraints enhance innovation – it is said that necessity is the mother of all invention.

Another interesting topic was fixedness – symmetrical, structural, functional etc. – which is normally good. But if you want innovation, you have to make an effort and break it.

Altogether, an interesting and intriguing presentation with lots of good examples. The kind of presentation that makes you think: “But what if I try?”

Alexandru Cernatescu, Co-Founder, CEO & Head of Strategy at Infinit Solutions Agency – Romanian Reality Innovation Update – What stands behind a Romanian story of success based on innovation? Unlike the other speakers, Alex didn't focus on innovation processes, case studies or ideas. He presented, in a very dynamic way, how innovation was the basis of his entrepreneurship and how, with will and dedication, one can overcome obstacles and write one's own success story.

As participants, we also had the opportunity to play with innovative products: transparent displays, the latest laptops that double as tablets, Windows phones. We even took part in a molecular food demo with Bernd Kirsch, Executive Chef of Radisson Blu, as an innovation in gastronomy. The opportunity to sample the end results was clearly a delight for all of us.

I would like to finish with a few recommendations that I liked and that were shared by most of the speakers. Innovation is not limited to a small group in the company; it should be encouraged as part of the internal culture. Managers (especially executives) should not pose as innovation judges, as this is most probably not their main focus and strength. The innovation process, if any, is ever changing and should be part of the innovation itself.

Categories: Web