Debug end-to-end (e2e) tests in VSCode

February 15, 2022

While working with TypeScript and Angular, I also made the switch to VSCode a while back. I was previously using Eclipse. Eclipse is a great IDE, but I wanted to try something new and VSCode was gaining a lot of momentum at that time. A dedicated IDE sounds like a good idea and an IDE built on a browser engine sounds exciting, right?

Nowadays, development speed has increased tremendously, but this came with a lot of challenges. Maintaining high development speed and high quality is important, but not easy to achieve. Automated tests definitely help. Unit tests are integrated in most current major web frameworks, like Angular. Unit tests are great for testing the micro-functionality of classes and components, but they are not sufficient. End-to-end tests are the next step: they help you test your application as a whole, but they are also more complicated to write.

We chose CucumberJS as an e2e testing framework for a few reasons. One is the fact that you can write tests in a more human-friendly language. Please keep in mind that this does not mean they are easier to develop, but they are somewhat easier to understand. For each Cucumber step you still need to write TypeScript code, but you get a clear separation between the description of the whole test and how exactly this test translates into browser actions. So non-technical people can review tests and, to some extent, write new ones. These new tests will still have to be reviewed by developers and, at least partially, backed by actual code, but they're a good starting point. This way, Cucumber and similar BDD frameworks can offer a common ground between developers, testers, designers and business analysts.
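To give you an idea, a Cucumber scenario might look like this (a hypothetical example, not taken from our actual test suite):

```gherkin
Feature: Login
  Scenario: Successful login
    Given the user opens the login page
    When the user enters valid credentials
    Then the dashboard is displayed
```

Each Given/When/Then line is then mapped to a TypeScript step definition that drives the browser.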

On the other hand, as Cucumber is more human-friendly and less organized and structured, these tests can become cluttered and harder to maintain. It would be so nice to have something like GitHub Copilot to help you eliminate duplicates in the Cucumber steps: first to suggest existing steps to avoid duplication, and later on to detect duplicates and suggest ways to eliminate them.

As I pointed out, every Cucumber test step must be backed by actual code which translates it into browser actions. As the underlying framework we chose Protractor. At that time it was the integrated Angular e2e framework and, even though Cypress is gaining a lot of momentum, I still see a lot of good value in Protractor too. Anyway, those two frameworks are pretty similar and I guess you can easily migrate between the two.

What linked Cucumber and Protractor together was protractor-cucumber-framework, which was fairly easy to set up, but that might be the subject of another post.
So we have Protractor, configured out-of-the-box in Angular, and we have protractor-cucumber-framework, which adds Cucumber as another layer on top of Protractor, where we can develop e2e behavior-driven tests in a human-friendly way.

But this is not what I want to talk about :). What I want to show you is how you can debug these tests in VSCode, which, as I found out, was not an easy thing to set up. But once the setup is done, everything works like a breeze and you can easily debug e2e tests in VSCode. Debugging is an important part of the development process and not having access to a debugger makes your developer life much harder.

The exact configuration that you need in your .vscode/launch.json file is below (the args and outFiles values are illustrative, adjust them to your project's layout):

{
	"name": "Debug e2e",
	"cwd": "${workspaceRoot}",
	"internalConsoleOptions": "openOnSessionStart",
	"preLaunchTask": "npm: e2e-compile",
	"type": "node",
	"request": "launch",
	"program": "${workspaceRoot}/node_modules/protractor/bin/protractor",
	"args": [
		// path to your Protractor config, e.g.:
		"${workspaceRoot}/e2e/protractor.conf.js"
	],
	"env": {
		"DEBUG": "true"
	},
	"sourceMaps": true,
	"outFiles": [
		// where the compiled .js files and source maps end up, e.g.:
		"${workspaceRoot}/e2e/dist/**/*.js"
	]
	// "skipFiles": [
	// 	"<node_internals>/**"
	// ]
}

There are a few interesting things. "type": "node" seems to be the only option working here; I also tried pwa-node, but without any luck.
Another thing is that if you write your Protractor tests in TypeScript you usually register ts-node for transpiling. But because ts-node does this in memory, you cannot use it here, as there has to be a link between the debugged source and the compiled output. So the workaround is that if you want to debug Protractor tests you must first compile them (this is what the preLaunchTask does) and then use the resulting .js files and associated source maps (this is what the sourceMaps and outFiles configuration options are for). Furthermore, e2e-compile is a script in your project's package.json:

"e2e-compile": "tsc --project e2e/tsconfig.e2e.json"

There's no big difference between this tsconfig and the one that you use in your project, the main difference being the use of the CommonJS module format.

{
	"extends": "../tsconfig.json",
	"compilerOptions": {
		"module": "CommonJS",
		"types": [
			// ambient type packages used by your tests, e.g. "node"
		],
		"incremental": true
	}
}

With the above changes you’ll be able to debug Protractor tests in VSCode. Please keep in mind that, even with this setup, you cannot set breakpoints in Cucumber tests, but you can set breakpoints in the equivalent .ts files describing these test steps. Almost as good.

Categories: Web

px vs em vs rem vs …

October 21, 2019

In this article I will try to summarize the pros and cons of using px vs em vs rem.
First of all let’s clarify some definitions.

A pixel is the atomic division of the screen (or other media support) and it consists of red, green and blue sub-pixels. But this is not a CSS pixel; this is a hardware pixel. A px (pixel), or CSS pixel, is actually a reference pixel: a relation between visual angle, viewing distance and device pixel density. I won't go into the details here. The idea is that a CSS pixel is not always a pixel on a screen, but more like an optical illusion of an atomic division.
A pt (point) is a measurement unit traditionally used in print media (anything that is to be printed on paper, etc.) and equals 1/72 of an inch.
An em is a measurement unit relative to the font size of the element it refers to. It's not known what exactly em stands for, but it is believed to come from the letter M (spelled em), whose size usually accommodates all letters in a font.
A rem is a measurement unit relative to the font size of the root element. It is very much like em, even the name says so – root em – but it has a different point of reference.
A % (percent) is a measurement unit relative to the font size of the element it refers to.

px and pt are absolute length units and em, rem and % are relative length units. What's the biggest difference between relative and absolute? Relative units scale according to their reference point (the current element or the root element), while absolute units do not scale at all.

Then what should we use? What’s the “perfect” measurement unit? Well, like in the real world, there’s no perfect. So, you have to consider a few things before making a decision. And remember that you can use these units for everything: margins, width and height of elements, image sizes, paddings, media queries etc.

Pixel perfect design versus support for a plethora of devices

If you design a printed report (yes, it's possible with HTML :)) and you want it to look beautiful on paper, then use pt. It is a typographical measurement unit and it will be the most appropriate one.

If you want to design a static page that will render beautifully (I mean pixel perfect) on one device only, then use px. But it will render on that one device only, and far from beautifully on other devices. It could still be useful if, for example, you want to design a beautiful static ad to be used in desktop browsers.

Why? First of all, because of what I said earlier: pixels are not actually atomic subdivisions of a screen, but rather optical reference units.
Can you patch this somehow? Kind of. Nowadays browsers have a zoom feature, but this approach has two downsides. You have to rely on the user to zoom your page in and out. You can also set a zoom programmatically (with CSS zoom or transform/scale), but you have to know the exact value for each device. And this will not mean that your page renders beautifully, but rather that it fits. And then it could still be too small (if you designed it on a larger device) or too large to be fully visible.

When should you use relative measurements, like em, rem or %? When you want to create web applications (which consist of dynamic pages) with support for different devices. But there's a downside here too – you could lose some of that pixel-perfect look. There is an old concept here, back from the days of Netscape Navigator, the liquid layout, which I think can still suit most needs. The idea is to develop a layout that can easily fit different screen resolutions, mostly accomplished by using relative positioning and relative sizes instead of absolute ones. I wrote about this in a previous post.

Let me clarify with an example. Let’s suppose we have a paragraph with a padding of one character. Supposing you have a font size of 10px, this will be translated in pixels like this:

p {
	padding: 10px;
}

or with em/rem like this

p {
	padding: 1em;
}

But if the font size changes to 24px, the second case will scale as designed, as opposed to the first case, where the padding will become just too small (less than half of a character).

Browser text size and accessibility

The Web Content Accessibility Guidelines define the success criteria for making a web page accessible. I would like to refer here especially to SC 1.4.4, which covers the capability of a web page to be readable (I would say viewable) at different text sizes. The recommended techniques for doing so are to use percent, em or named font sizes. Please keep in mind that these techniques are just recommendations; they are not mandatory.

To make this clearer, let me turn to a common browser feature: setting the text size. I see this kind of feature as especially useful on e-readers, but also on mobiles. Now take the example above with the padding. If a user sets the text font to a larger size, that padding will become just too subtle to distinguish between paragraphs. Or just imagine an emoji image that becomes less than half of the text size and screws up the entire alignment. I know, I know, you could still use icon fonts and special characters for emojis, but I was just making a point about an inline image.

If web accessibility is not a concern for your users, then px is just as good.

Floating point precision

All these CSS units, either relative or absolute, are capable of handling fractional values. Yes, 0.5px could actually make sense. Remember that a CSS pixel is not a hardware pixel, but an illusion, and it could actually represent 1 or 2 hardware pixels on a device with a DPR of 2 or 4. Here the winners are the relative units like em, rem or %, because they provide better support for floating point precision.


Maintainability

It's clear, even from the example above, why CSS based on relative measurement units is easier to maintain than CSS based on absolute ones. If you change a value in one place, everything else scales automatically. If you use absolute units, you have to change it everywhere.

Nowadays, Sass (SCSS) has become a de-facto standard for developing CSS, especially in web applications. And if you use semantic CSS sizes, which I would recommend anyway (I already covered this in a previous blog post), this becomes much easier to manage.

How does this translate into code? Instead of using padding: 16px, define a SCSS variable $medium: 16px and use that one: padding: $medium. Every time you change the $medium value, everything will be updated. Keep in mind that too many sizes, or sizes with names that are not semantically chosen, will just clutter your code and hurt maintainability over time.
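As a minimal sketch (the variable names here are just an illustration):

```scss
// sizes defined once, with semantic names
$small: 8px;
$medium: 16px;

p {
  padding: $medium;
  margin-bottom: $small;
}
```

Changing $medium to 24px updates every rule that uses it, instead of hunting for 16px all over the codebase.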

Remember, though, that using Sass does not solve any of the scaling issues above.

Media queries – are they special?

What should you use for CSS media queries? To help you make a decision, I will translate it into plain English. Let's say you want to choose between screen sizes; how would you like to say it: "I want option A if the screen can fit 80 characters and option B for more" or "I want option A if the screen can fit 600 optical illusions of a pixel and option B for more"? As you guessed, the first option is relative (em or rem) and the second is absolute (px).
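The two phrasings above translate roughly into the following media queries (the breakpoint values and class name are illustrative):

```css
/* relative: switch layout when roughly 80 characters no longer fit */
@media (max-width: 40em) {
  .content { flex-direction: column; }
}

/* absolute: switch layout at a fixed number of CSS pixels */
@media (max-width: 600px) {
  .content { flex-direction: column; }
}
```

Note that with the em version the breakpoint also moves along with the user's browser text-size setting, which ties back nicely to the accessibility section above.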

Relative, but root or contextual?

Now, if you have decided on relative units, what should you use: em or rem?

Well, again, it's your decision. But I will try to simplify it for you with a few examples. Let's say you have a reference to a footnote, implemented with the sup tag. How do you want it to be rendered? Always at the same size, or relative to the neighboring text? Let's suppose that the heading has double the font size of the root one (h1 { font-size: 2rem; }). Do you want the footnote reference to be doubled too, or do you want it at the same size as in a paragraph? If you go for the former, use em (sup { font-size: 0.6em; }); for the latter, use rem (sup { font-size: 0.6rem; }).

Now take the heading from the example above. If you place it in the footer, do you want it to be the same size as the one in the content, or smaller, according to the footer font size? If you want it smaller, then drop the rem from the example above and use em instead.

Let's consider a more complex example: an extra-information paragraph implemented with the aside tag. Do you want it to render the same all over your page, or to scale, say scale down, when you include it in the footer? The answer is simple: rem for the first option, em for the second.

It’s also an option to combine these two.
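Putting the examples above together in one place (the sizes are illustrative):

```css
h1 { font-size: 2rem; }        /* anchored to the root: same size everywhere */
sup { font-size: 0.6em; }      /* scales with the surrounding text */
aside { font-size: 0.9em; }    /* shrinks inside a smaller-type footer */
footer { font-size: 0.8rem; }  /* footer type anchored to the root size */
```

Mixing rem for anchors and em for things that should follow their context is exactly the kind of combination meant here.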

Fortunately, even though it is a more recent unit, rem has support in all modern browsers. So this is not an issue.


There's no perfect measurement unit to be used in your pages, and you'll see lots of examples of both. There are a lot of advocates for the relative ones, simply because responsive web design and support for different devices and browsers have become a priority nowadays. I believe it's essential to understand the differences, pros and cons, and then make a decision. Choosing a combination of these is also a viable option.

Categories: Web

A new project with TypeScript and Angular

July 2, 2018

More than a year ago, I started a new adventure in a new startup company. New company, new adventure and a new project. New technology maybe?
Of course the risk of adopting a new technology in a new project is lower than migrating to a new technology in an existing one, but there still is a risk. Especially if the technology is young and almost no one in the team has experience with it.

I've been working on web projects for almost twenty years, and with JavaScript for this entire period. It is said that JavaScript is the least understood language. And even if you understand it, you need a very high level of discipline in designing your application and writing your code if you want to keep away from spaghetti code. One of the biggest issues with JavaScript, in my view, is that it's not a strongly typed language. In the past I even tried to bring classes into JavaScript in my code. But this solves the problem only partially.

You can understand my enthusiasm when I saw TypeScript. A strongly typed language for the web. Yoohoo! And an entire framework built on top: Angular. Angular, not AngularJS. I worked with both frameworks, but basically all they have in common is the name. Angular is also known as the next version of AngularJS, or Angular 2, 4, 5, 6 …

Now, coming back to the project: I proposed the new TypeScript/Angular as its development language/framework. At that moment it seemed like a big risk: no one in the team had used it before, and even I had used it in only a couple of projects, none of which made it into production. But now, in retrospect, I believe it was one of the best decisions when it comes to technology selection for a new project.

I will not insist too much on TypeScript and Angular, but I would still like to point out a few advantages that I really like, to make my case.


TypeScript is a strongly typed language for the web with a lot of similarities to JavaScript. It's not an interpreted language, but a hybrid one that compiles to JavaScript. This way you'll catch a lot of errors right in the development phase; even better, they'll be flagged by your favorite IDE/editor.
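As a small illustration (hypothetical code, not from the actual project), the compiler rejects type mismatches before the code ever runs in a browser:

```typescript
interface Product {
  name: string;
  price: number;
}

// the compiler guarantees that every element has a numeric price
function total(products: Product[]): number {
  return products.reduce((sum, p) => sum + p.price, 0);
}

const cart: Product[] = [
  { name: "book", price: 12 },
  { name: "pen", price: 3 },
];

console.log(total(cart)); // 15

// total([{ name: "book", price: "12" }]);
// ^ flagged at compile time: string is not assignable to number
```

In plain JavaScript the commented-out call would silently produce "012" by string concatenation; here the mistake never leaves the editor.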

Looking into the future, I think new projects and libraries should be written in TypeScript, even the ones targeting JavaScript users. TypeScript is interoperable with JavaScript: the code compiles to JavaScript and the library is also augmented with type information for TypeScript users. The compiled script is optimized, obfuscated and easy to integrate. JavaScript acts as a kind of assembly code in this case.

A lot of the TypeScript improvements came to JavaScript through the latest ECMAScript standards, but not all of them are widely supported. There are also initiatives for native TypeScript support directly in the browser. But quite a few of the advantages outlined above would still stand in a hybrid approach (compiled + interpreted).

In conclusion, I believe TypeScript is the modern and the best choice when it comes to programming languages for the web. It’s so cool, that sometimes I cannot believe it was made by Microsoft. Of course, it was a joint effort and maybe this approach will make them think about their future in a more and more open community.


Angular is the perfect companion framework for TypeScript. Its componentized approach could seem like overkill in the beginning, but in an enterprise project you'll quickly see its value. Components can be easily isolated and reused. It's so easy to develop such a component that sometimes it can be easier to develop your own from scratch than to customize an existing 3rd-party one. Of course, this should be the exception rather than the rule :).

As I said earlier, AngularJS and Angular have basically only the name in common. Because of that, it's pretty hard to upgrade from the former to the latter. Upgrading between different versions of Angular is quite the opposite, as they maintain a high level of backward compatibility and features are deprecated progressively. It usually took me just a few hours to upgrade from Angular 2 to 4, from 4 to 5, from 5 to 6. Because TypeScript is strongly typed, the compiler, or even better the IDE, points out the errors, making the upgrade extremely easy and straightforward.

Of course, a homogeneous product is the ideal case, but those are so rare … We had to integrate our project with an existing one built on AngularJS. It was like a case study: how to upgrade and interoperate between the two Angulars. Angular came with a nice rescue solution here and, with a decent effort, we came up with a clean way of doing this. I will not go into details here, but the nicest part, which definitely gained my vote, was that you could actually upgrade module by module, or even component by component. And the effort finally paid off when we started to reuse parts of the new project in the old one.

If you want to start a new project, Angular is a very well equipped framework that comes out of the box with a TypeScript linter and compiler, webpack, SCSS support, unit and automation testing, polyfills etc. AngularJS did not have an official scaffolding tool, but Angular has the Angular CLI, which does a nice job.

TypeScript and Angular offered us a development landscape with an emphasis on ease of development, fewer errors and lots of reuse opportunities. I think it was the best foundation on top of which we could build a modular toolkit, based on atomic design principles. We also managed to create a continuous build system, where the code was lint-checked and compiled for different environments, catching a lot of issues right from that phase, a much harder or even impossible endeavor with JavaScript and other frameworks. We also integrated unit and automation tests and are working on extending their coverage. This will give us the confidence to build new features at higher speeds and shorten release cycles.

So, every time you start a new project, especially if you're unhappy with your development ecosystem, try investigating new ones – technologies are evolving at much higher speeds nowadays. For the past decade or so, the biggest issue in software and web development has been maintainability, even before performance. And more importantly, don't be afraid of change; embrace it.

Categories: Software, Web

Atomic design

January 23, 2018

I recently read Atomic Design by Brad Frost. It was like a breath of fresh air – look! someone else is thinking the same, phew, I’m not alone. And not only that, someone else took the time to write a book and formalize everything. I sincerely believe that the book should be a mandatory read (it’s easy and takes only a few hours) for anyone involved in web projects – UX designers, visual designers, copywriters, front-end developers, back-end developers, testers, project managers, CTOs … anyone! Are you in the web business? Then read it.

Why did I like it so much? Not only because it lays down some very good design principles and offers a common language for them, but mainly because it preaches a mindset change.

And now I come to the point where I want to tell you about my experience in this field. About six years ago, around the same time Brad started to apply the principles in his book, I was working for a major IT company. Mobile web was on the rise, but the company had no presence there. The desktop website rendered on mobile as just a zoomed-out version, making it unreadable and, of course, unusable. So I pitched the idea of building such a mobile web presence to my manager, and I was in luck. I was tasked with creating a proof-of-concept, and the idea caught on. One other manager was on board, and this way a new team was born. The proof-of-concept went live so that we could actually get some usage metrics.

A few months later there was another lucky development. Periodically, the company went through brand redesigns, and this included the web presence too. Frankly speaking, my case was not exactly like the ones in Brad's book. The company had structure and clear brand guidelines, including for the web. They even had a governance team. But the problems were of another nature. First, to give you an idea of the magnitude: the company website consisted of hundreds of thousands of web pages and tens to hundreds of web applications. A lot of teams, even external agencies, were working on these.

This raised some very interesting challenges:

  • Redesign was tedious, long and expensive. Many people worked for many months on these updates. Many times entire sections or applications did not benefit from the facelift, so the same website had different designs living side by side.
  • Because the web guidelines were quite extensive, the learning curve for creating web pages and applications was very steep. Thus, high costs.
  • The governance team was overwhelmed and most of the time busy checking whether websites met the standards they had put in place. Due to this check, going live with a website was delayed, sometimes inexplicably from the point of view of the developers and stakeholders.
  • The special needs of some development teams were almost never addressed, which ended up in frustration and either going rogue or doing ok-ish work with the sole purpose of just delivering something.

Then I said to one of the UX designers:
– Cool! We have now the chance to do things the right way. We build not only a mobile presence, but a responsive web presence for all the devices and we will build it like a toolkit that can be used by everyone.
– Nah! Building such a toolkit will be very tedious. Not technically, but politically, to get the buy-in of all the high levels involved.
– Then let’s do it at a smaller scale to show them the advantages.
– Yup, here you may have something, let’s do this for mobile.

Then I connected directly with the UX team and asked them to create a list of semantic components. I explained to them the concept of semantic design (CSS), its advantages, and the fact that it doesn't add any extra work – it's just a paradigm shift. They were very open (maybe I infected them with my enthusiasm, maybe they wanted to try something new) and they agreed. After just a few iterations we had an entire box full of semantic components. At that time I wasn't aware of atoms – molecules – organisms, even though we organized them incrementally too. But I think this naming convention is much clearer.

I just want to make a small parenthesis here: the biggest challenge was finding good names, and most of the iterations were related to it. We even applied this when naming colors. I believe that if you cannot find a good, SEMANTIC name that stands the test of style, then it will not stand the test of time. The test of style means that the name still makes sense even if you completely change its style.

Initially, the components were presented in a desktop style, but this was no issue. Creating a mobile style was fairly easy and fast. Only then did we jump to development, and it was a breeze this time. We even used the same names in CSS, and the fact that we could reuse styles and components made all that planning effort and mindset change worthwhile.

We ended up with a toolkit in just a few weeks. And this was the output of a team of just 4-5 developers, not fully dedicated to this project. Now other front-end developers and external agencies were able to develop mobile pages. But how was this better or faster? First of all, they got rid of all the web guidelines, a huge book to read and, if you wanted to be proficient, memorize. Now they had a few templates, a handful of components and just a few pages of documentation. If they stuck to those templates and those components, their pages would be compliant. And that is how you get quite fast approval from the standards team. This toolkit also had all the quirks of mobile development embedded (back then there were many more than nowadays 🙂 ). They were able to test their mobile pages in desktop browsers and have the confidence that they would work on mobile devices too. We, as the development team, took care of all the inconsistencies between different devices and browsers. This also gave them the opportunity to focus on the task at hand: develop mini websites fast and not care about a plethora of devices and associated quirks.

We also had toolkit guidelines, but ours were much simpler: do not introduce any new CSS classes or any new custom tags – just stick to our templates and components. Would you think it's too restrictive? Not a chance. We also advertised to all our users that we would create ourselves any new components they might need, if the current ones were not sufficient. And we got a lot of requests. But most of the time, those requests were practically a misunderstanding of the naming convention – they were looking for a synonym of an existing component. We ended up adding synonyms to the documentation. And to make it official, we always responded with a link to that documentation. Sometimes, creating a new component wasn't actually necessary, just tweaking and customizing an existing one. This way the UI toolkit became more powerful and the documentation more comprehensive with each request coming from our users. And the number of requests kept decreasing, freeing up our time to extend and improve the toolkit.

For static pages we had templates and components developed in …, actually the name of the tool is not important, but the fact that the users were not starting with a blank page. And the learning curve was not steep anymore. We even had JSP templates and custom tags available for web application development. We also created a transcoder from desktop pages to mobile optimized pages using the same toolkit.

And now I will tell you about two cases demonstrating the power of this approach, cases that stuck in my mind.

Less than a year later, by the time our toolkit had become the one-stop shop for mobile page development, the time came for a new company-wide redesign. Most of the components stood the test of time and just needed a facelift. Probably the most eye-catching change was the move from a black background to a white one. I'll tell you later why I revealed this detail.

After the UX and visual designers handed us the new specs, for us it was mostly a matter of writing a new CSS. Which we did in less than 3 weeks. When we were ready to go live, the desktop team was simply amazed:
– What? We haven’t finished yet the homepage. But I guess you did it only for a few pages, so it’s not actually ready to go live yet.
– No, we did for ALL the pages.
– Including the transcoded pages?
– Yup!
– For all the countries and languages?
– Of course.
– But you could not go live, we’re not ready.
Then the frustration was on our side, and of another kind. But they agreed to a compromise: to change the style just on the homepage, with all the subsequent pages remaining in the old style. Most probably they were hoping this would buy them at least another 2-3 weeks. The next day we came up with this version – in the end it was just a matter of conditionally including one CSS or the other.
– But we also want to do some A/B testing and release the new design gradually.
– Ok, no problem, we already have support for this. We just need to know the target users for A and B.
Finally they let us go live with the new version in full. A few weeks later, they managed to release the homepage; many months later, approximately 80% of the pages had the new design. The A/B testing never happened.

The next day, after we went live with the new design, one of the front-end developers came to my desk and told me:
– Yesterday I was developing a mobile page. In the evening I saved it and shut down my computer. This morning I came in, opened it, and it had turned from black to white. But I swear I didn't do anything.
I started laughing and assured him that it was fine, that this was the new redesign communicated by the standards team just a few weeks back. I have to admit that I was happy about this level of upgrade. But we also sent out a communication stating that front-end developers didn't have to do anything to get the new version. They just had to use the same tools, guidelines and toolkit. The replies were almost instantaneous: "We want this for desktop too!"

This is how we got the buy-in for creating a new RESPONSIVE toolkit. But that’s another story.

Update Jan 24th, 2018 (thanks to Meghan)

There are just a few, but very valuable things that I took from the “Atomic Design” book. First of all this book formalizes the entire process and gives pretty good names to everything.

I was using the term semantic design, but atomic suggests the idea of modularity. On the other hand, semantic clearly states the separation between content and style. We were using the term components, and even though we described them incrementally too, I think atoms-molecules-organisms makes a much clearer separation and gives a better idea of magnitude.

Another nice idea is the clear separation between UX comps and visual design. If you do this, it will be an additional check that your design system is both modular and semantic.

Categories: Web

Paperless user manuals

August 12, 2016

For the last few years an idea has been bugging me. You've probably seen those user manuals that come with almost any product. Yeah, the thick ones that nobody reads. Even better, some are accompanied by a CD. Really? A CD? Has a time machine been invented, and are those manufacturers producing goods to ship into the past?

Who's using a CD anymore? Or should I ask, who owns a CD or DVD drive anymore? Of the thousands of laptops on the market, I think you can find DVD drives in only a few tens or hundreds of them. And when it comes to tablets, and many laptop owners have switched to tablets, the ratio is a clear x:0.

The funny part is when you get such a CD for setting up a laptop or even a DVD drive. Come on! Everyone now has an Internet connection, and optical fiber is even the de facto standard in many countries. Yeah, I agree that there are some exceptions, but I would say they are only that: exceptions. And if they’re not exceptions, how would you describe owning a CD/DVD drive?

So why are manufacturers so stubborn about printing and packing those? Is there any kind of law forcing them to? I think there should be one forcing them not to. As an ecologist and a parent, I care about what happens to our trees and the environment in general.

I would think that the best thing would be a small standard sticker with a QR code and a tiny URL printed on it. From that point on you get a web page with a much better interaction: a starting guide and usage sections in a mobile-optimized experience. Nowadays most people own a smartphone, so this would be an improved experience. Not to mention that a few months after you bought a product, you’ve most probably lost the user manual. But not if you bookmarked the page or simply scan the sticker again.

Also imagine the costs of creating such user manuals and printing them. Costs that you pay as the end user.

Unfortunately, until a law enforces high taxes on this kind of manual and the total waste of resources behind it, I don’t think the situation will change.

Categories: Ideas

PhoneGap setup

March 22, 2016 1 comment

It’s not the first time that I’ve played with PhoneGap, but I haven’t done so in quite some time. I’ve always liked the idea of creating a platform-independent application. And if that application can be tested directly in the web browser, even better.

Creating a user interface in a descriptive language like HTML is easier than a programmatic approach where you have to write code to create your visual components. Nowadays most frameworks also offer the descriptive approach, usually through XML, but learning a new language when you already know another, more powerful one is not that appealing. HTML is also augmented by CSS, which easily offers a high degree of customization, and JavaScript, which brings the functionality. All together they create a platform-independent framework with a high degree of customization and a clear separation of layers.

So it’s clear why I like the idea of PhoneGap right from the start. Now, let’s set it up.

To develop a PhoneGap application you don’t need too many things. The best way is to install nodejs and then phonegap: npm install -g phonegap.

Then you can create a sample application with phonegap create my-app, a command which will create all the necessary files and subfolders under the my-app folder.

Now comes the testing part, and for this you need to install PhoneGap Desktop. As I said, it’s nice that you can test your app directly in your browser by visiting the link displayed at the bottom of the PhoneGap Desktop window (hint: it doesn’t work with localhost). And if you install the PhoneGap Developer App you can easily test on your mobile too, without the hassle of installing the application itself every time you make a change – changes will be automatically deployed (reloaded).

When you’re done, it’s time for the fun part – actually building the application. Let’s do this for Android.

First you need to install JDK (I tested with version 8) and Android Studio.

And then you need to set up some environment variables:

  • JAVA_HOME – this must be set to the folder where your JDK, not JRE, is installed.
  • ANDROID_HOME – this must be set to the folder where your Android SDK is installed.
  • add to PATH the following: %ANDROID_HOME%\tools;%ANDROID_HOME%\platform-tools;%JAVA_HOME%\bin on Windows, or ${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools:${JAVA_HOME}/bin on Linux (note the forward slashes and colon separators).
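On Linux, the setup could look like the sketch below. The paths are purely illustrative assumptions – adjust them to wherever your JDK and Android SDK actually live:

```shell
# Illustrative locations only – replace with your actual install paths
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # the JDK folder, not the JRE
export ANDROID_HOME="$HOME/Android/Sdk"
# tools and platform-tools for the Android commands, bin for java itself
export PATH="$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$JAVA_HOME/bin"
```

Put these in your shell profile so they survive a restart.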

If the above are not correctly set, or the PATH is invalid (e.g. it has an extra quote (“) or semicolon (;)), you can run into errors like

  • Error: Failed to run "java -version", make sure that you have a JDK installed. You can get it from: Your JAVA_HOME is invalid: /usr/lib64/jvm/java-1.8.0-openjdk-1.8.0
  • Error: Android SDK not found. Make sure that it is installed. If it is not at the default location, set the ANDROID_HOME environment variable.

I also had to run

phonegap platforms remove android
phonegap platforms add android@4.1.1

By default I had installed Android 5.1.1, but I was getting the error Error: Android SDK not found. Make sure that it is installed. If it is not at the default location, set the ANDROID_HOME environment variable. You can check what platforms you have installed by running the command phonegap platforms list.

Make sure that you have all the Android tools and SDKs installed by running android on the command line and select all the ones not installed and install them.

Finally, you can build the application by running the following command in your project folder:

phonegap build android

and if everything goes well you’ll find your apk at <your-project-dir>/platforms/android/build/outputs/apk.

Categories: Software, Web

Banking apps

October 8, 2015 1 comment

I was thinking a few days ago about what I want from a banking application. So I decided to write an article from the user’s point of view, about their expectations when it comes to banking applications. This one is for everyone, not only technical people :).

We have to admit that banks are huge dinosaurs, especially when it comes to their web sites and the web applications offered to users. And it shouldn’t be the case. They’re making huge piles of money out of thin air, and I’m trusting them with mine; the least they could do is give me some good tools.

First, there are some features that every banking app should incorporate. You should be able to access any of your accounts – current, savings or loans – have an AGGREGATED status of all of these, create new ones in any currency or delete existing ones, and easily transfer between them and any external accounts. And if I want to transfer money, I don’t care if it’s internal, national or international; I’ll just give the recipient, the IBAN (or any account identification string) and the amount, and that’s it. Show me the transaction fee (and the exchange rate if that’s the case) and if I accept it, just do it. The same goes for transfers scheduled in the future or recurrent (weekly, monthly, yearly) ones. If the transaction fee changes on any of these in the future, deactivate them and just let me know so I can reactivate them.

Direct debits are a must. I like a bank where I have to spend less of my time in the bank, offline or online. So I should be able to easily set up direct debits like any other transfer – specify a recipient, an IBAN and a limited amount per week, month or year. Also, any company should be able to easily request debits from my account, through some kind of API. Then I could do this not only for my gas and electric bills, but also for my internet provider or gym subscription. And I could cancel them at any time or just set an expiration date.

Another must is being able to associate my card with any of my accounts. Imagine that I go abroad and would like to spend money from a foreign currency account. Being able to switch and switch back instantly shouldn’t be impossible or a hassle – traveling is not uncommon.

A mobile app with which I can pay, without any credit/debit card, is also something that should be ordinary by now, not cutting-edge tech.

Categories: Web

IMW 2015 or How to make an (un)interesting presentation

October 8, 2015 Leave a comment

I’ve just attended for the fourth (I guess) year in a row the biggest Romanian conference on Internet and mobile. Bigger and bigger each year, I always had a love-hate relationship with Internet and Mobile World. At the beginning I say that it’s not interesting and it’s probably the last year I will attend, but then a few presentations and exhibitors change my mind. This year was no different.

I recently read a Dale Carnegie book in which he says that if you want to sell something you shouldn’t talk about you and your product, but about the customer’s need. Nothing but true. With this in mind, I saw some really boring presentations. First of all, some of them were given by managers or CEOs. I don’t have anything against CEOs, but let’s get one thing straight: they got there not because they know how to captivate an audience, but because they know how to run a business. Unfortunately, some managers, as soon as they become managers, also instantly gain access to the entire world’s knowledge and become proficient in every skill known to mankind. No! Again, they became managers because they recognized talent and knew how to acquire it for their business. They should do the same here – get someone else to give a really interesting and captivating presentation.

I understand that you want to brag about what cool things you are creating and how well your company is doing. But I, in the audience, am also doing cool things. And if I’m not, I probably hate you. Or I simply don’t care. You don’t care about my needs or interests either, so why shouldn’t this be a fair relationship?

I can easily get a financial report, a projection or a product/service portfolio from any company website. So don’t come and present long and boring slides about any of these topics. But if you talk to me about my needs and interests, and then slowly introduce me to how a new technology can help, you can arouse my curiosity. And now that you’ve got me, you can also happen to mention your cool and innovative product that is the incarnation of that technology. And don’t make any false claims, either about the product or about your expertise in the field. If I realize it – and I surely will if I’m interested in the subject, given the multiple channels of gathering information nowadays – I will drop your presentation like a rotten apple. At the very best, a shadow of doubt will fall over it in its entirety.

High-level managers of large corporations, the ones which haven’t evolved from startups, usually tend to give results-oriented presentations. Whoa! I did not pay for a ticket to come and do your business for you. I got it – your company is the greatest; I think we established that already. Developers, on the other hand, tend to lose themselves in small technical details – I even saw slides of code in front of a general audience. Whoa again! I did not come here to work. I came to get fresh ideas and new contacts, to see where the market (not a company or product) sits as a whole. So you’d better intrigue me, show me something cool and interesting, spiced with clear use cases. Walk in my shoes and show me a road to your product, but an interesting road.

Few presentations were like this, but those that were fell into the interesting area, even though I already knew some of their content. And for God’s sake, speak loudly and in no way in a monotone voice. And if you don’t master English, better ask your audience if you can give your presentation in your mother tongue.

I also need to highlight an interesting idea materialized into a cool product by a Romanian company. Altom built a robot that, by using two motors and a camera, was able to automate testing on real(!) devices like cameras and tablets. Using the camera it recognized images, and then it was able to move on the XY axis and tap on the device. And you could write your testing scenarios directly from your IDE, just like a Selenium test case! I would vote them the most innovative product at this fair.

In general, the market trends seemed to be streaming and video on mobile, and the Internet of Things. Robotics seemed to be catching on, but this has been the case with this field for years and years: having spikes, but never really becoming mainstream. Indeed, in different forms and shapes robots have already entered our lives, but not at the level SF fans always hoped for. The cloud moved to a mainstream level, which is as it should be; I would definitely not think today of creating my own infrastructure, no matter the size of the project or company.

So, all in all, IMW 2015 wasn’t a waste of time, with some (though not a majority of) interesting presentations and exhibitors. Will I go next year? We’ll see :).

Categories: Technology

Shift an array in O(n) in place

April 7, 2015 Leave a comment

Below I copied the code for shifting an array in place.

void shift(Object[] array, int startIndexInclusive, int endIndexExclusive, int offset) {
    if (array == null) {
        return;
    }
    if (startIndexInclusive >= array.length - 1 || endIndexExclusive <= 0) {
        return;
    }
    if (startIndexInclusive < 0) startIndexInclusive = 0;
    if (endIndexExclusive >= array.length) endIndexExclusive = array.length;
    int n = endIndexExclusive - startIndexInclusive;
    if (n <= 1) return;
    // normalize the offset into [0, n)
    offset %= n;
    if (offset < 0) offset += n;
    while (n > 1 && offset > 0) {
        int n_offset = n - offset;
        if (offset > n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n - n_offset, n_offset);
            n = offset;
            offset -= n_offset;
        } else if (offset < n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            startIndexInclusive += offset;
            n = n_offset;
        } else {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            break;
        }
    }
}

The swap(array, index1, index2, len) method swaps in the given array the elements from [index1, index1 + len) with the ones [index2, index2 + len).
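As an illustration, the swap helper described above can be sketched like this (a minimal version that assumes valid indices; the actual commons-lang implementation also guards against out-of-range arguments):

```java
public class SwapDemo {
    // Swaps the block [index1, index1 + len) with the block [index2, index2 + len),
    // element by element, in place. Assumes all indices are within bounds.
    static void swap(Object[] array, int index1, int index2, int len) {
        for (int i = 0; i < len; i++) {
            Object tmp = array[index1 + i];
            array[index1 + i] = array[index2 + i];
            array[index2 + i] = tmp;
        }
    }

    public static void main(String[] args) {
        Object[] a = {1, 2, 3, 4, 5};
        swap(a, 0, 3, 2); // swap the first two elements with the last two
        System.out.println(java.util.Arrays.toString(a)); // prints [4, 5, 3, 1, 2]
    }
}
```

Note that only O(1) extra space (a single temporary) is used, which is what keeps the overall shift in place.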

Even though it may seem complicated at first, the idea is pretty simple. If the offset is half of the array length, or in other words if offset == (n – offset), where n is the total number of elements to be shifted, then the shift is equivalent to swapping the two halves of the array.
For the other two cases, we swap the portions at the two ends; one of them lands in its final place and we continue the iteration on the rest, as shown in the figure below.

shift algorithm

Space complexity is clearly O(1), but what about time complexity? I’m gonna prove that it is O(n).

Let Sh(n, k) be the problem of shifting k positions in an array of size n and Sw(k) be the problem of swapping k elements in an array. For the sake of simplicity I left out the start and end indices.

It is obvious that O(Sw(k)) = O(k).

It is also obvious that O(Sh(1, k)) = O(1), with k < 1. Also O(Sh(x, 0)) = O(1).

Now let's assume that O(Sh(n, k)) = O(n), whatever n, with k < n. I'll try to prove that O(Sh(n + 1, k')) = O(n), with k' < n + 1.

Analyzing the algorithm we have

O(Sh(n + 1, k’)) = max(
    O(Sw(n + 1 – k’)) + O(Sh(k’, 2k’ – n – 1)), if 2k’ > n + 1
    O(Sw(k’)) + O(Sh(n + 1 – k’, k’)), if 2k’ < n + 1
    O(Sw(k’)), if 2k’ == n + 1
)

  • First case 2k’ > n + 1

    O(Sh(n + 1, k’)) = O(Sw(n + 1 – k’)) + O(Sh(k’, 2k’ – n – 1)) = O(n) + O(Sh(k’, 2k’ – n – 1))

    Because (k’ < n + 1) ⇒ (k’ ≤ n) and (2k’ ≤ 2n) ⇒ (2k’ – n – 1 ≤ n – 1) ⇒ (2k’ – n – 1 < n), then O(Sh(n + 1, k’)) = O(n) + O(n) = O(n).

  • Second case 2k’ < n + 1

    O(Sh(n + 1, k')) = O(Sw(k')) + O(Sh(n + 1 – k', k')) = O(k') + O(Sh(n + 1 – k', k')) = O(n) + O(Sh(n + 1 – k', k'))

    Because (0 < k') ⇒ (n + 1 – k' < n + 1) ⇒ (n + 1 – k' ≤ n) and (k' < (n + 1)/2) ⇒ (k' ≤ n/2) ⇒ (k' < n), then O(Sh(n + 1, k')) = O(n) + O(n) = O(n).

  • Third case 2k’ == n + 1

    O(Sh(n + 1, k’)) = O(Sw(k’)) = O(n)

So O(Sh(n + 1, k’)) = O(n). As a consequence O(Sh(n, k)) = O(n), whatever n, with k < n.

The code will become part of commons-lang, ArrayUtils class, as of version 3.5.

Categories: Web

I had it with Maven

April 6, 2015 3 comments

Initially I was a big Maven fan. It was the best build tool. When coding in C I was using make, and after switching to Java, naturally Ant was next.

But Maven was so much better. It had a few things that you cannot help but love. First of all, dependency management. Getting rid of downloading the jars, including them in a lib folder, and the even bigger pain of updating them … wow … getting rid of all of that was a breeze.

Maven also had a standard build lifecycle, so you could download a project and start a build without knowing anything about it. You could have done this in Ant, but there projects would have had to follow a convention, which wasn’t always the case. In all honesty, there wasn’t even one :), at least not a written, formal one.

And then Maven came with a standard folder structure. If you pick up a new project, it’s clearly easier to understand it and find what you’re looking for.

And … that’s about it. I would like to say that the fact that it uses XML was also a good thing. XML is powerful because it can be easily understood by both humans and computers (read: programs). But no tool other than Maven was interested in understanding its POM. And while XML is great for describing and formalizing something, like a workflow, using it for imperative tasks is not. Ant was doing this, and going through an Ant build wasn’t the easiest task of all.

Maven was also known for its verbosity. Take any package from a Maven repository and you’ll clearly see the most verbose declaration format, something like:

    <dependency>
        <groupId>commons-lang</groupId>
        <artifactId>commons-lang</artifactId>
        <version>2.6</version>
    </dependency>

as compared to Gradle, for example:

    compile 'commons-lang:commons-lang:2.6'

And that’s for adding just one dependency. If you have 30 or more in your project, which for a web application is not uncommon, you end up with a looot of code …

And then comes the customization. It is very good that Maven comes with a standardized lifecycle, but if it’s not enough (and usually it isn’t), it’s very hard to customize. To do something really useful you’ll need to write a plugin. I know that there is the exec plugin, but it has some serious drawbacks, the most important one being that it cannot do incremental builds. You can simply run a command, but you cannot run it only when some destination files are outdated compared to their corresponding sources.

So I needed something else. I looked a little over some existing build tools and none of them seemed very appealing, but I ended up switching to Gradle. Two reasons: Spring is using it (I’m a big fan of many Spring projects) and I wanted to get acquainted with Groovy too.

While switching a web application project, which took me 3 days, I ended up with a 10KB build file instead of 36KB, plus many added features. I was able to build my CSS and JS using Compass/Sass and browserify and, more importantly, incrementally (but as a whole).

I was also able to better customize my generated Eclipse project, including specifying derived folders. As a side note, for improved Gradle support in Eclipse you need to install Gradle IDE from the update site, and the minimum required version is 3.6.3 – see here why. You may need to uncheck Contact all update sites during install to find required software if you get an installation error.

Gradle is probably not the dream build tool: it has a very loose syntax and it mixes descriptive properties with imperative tasks, reducing readability. But it’s less verbose and much more flexible than Maven. With some coding standards applied on top, it could become a good choice.

Categories: Software Tags: , ,