Shift an array in O(n) in place

April 7, 2015 Leave a comment

Below I copied the code for shifting an array in place.

void shift(Object[] array, int startIndexInclusive, int endIndexExclusive, int offset) {
    if (array == null) {
        return;
    }
    if (startIndexInclusive >= array.length - 1 || endIndexExclusive <= 0) {
        return;
    }
    if (startIndexInclusive < 0) {
        startIndexInclusive = 0;
    }
    if (endIndexExclusive >= array.length) {
        endIndexExclusive = array.length;
    }
    int n = endIndexExclusive - startIndexInclusive;
    if (n <= 1) {
        return;
    }
    offset %= n;
    if (offset < 0) {
        offset += n;
    }
    while (n > 1 && offset > 0) {
        int n_offset = n - offset;

        if (offset > n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n - n_offset, n_offset);
            n = offset;
            offset -= n_offset;
        } else if (offset < n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            startIndexInclusive += offset;
            n = n_offset;
        } else {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            break;
        }
    }
}

The swap(array, index1, index2, len) method swaps, in the given array, the elements in [index1, index1 + len) with those in [index2, index2 + len).
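The swap helper itself isn't listed in the post; below is a minimal sketch of what it might look like, reconstructed from the description above (the bounds handling is my assumption, not the actual commons-lang code — the two ranges are assumed in bounds and non-overlapping):

```java
import java.util.Arrays;

public class SwapDemo {

    // Swaps array[index1 .. index1 + len) with array[index2 .. index2 + len).
    // Assumes both ranges are within bounds and do not overlap.
    static void swap(Object[] array, int index1, int index2, int len) {
        for (int i = 0; i < len; i++) {
            Object tmp = array[index1 + i];
            array[index1 + i] = array[index2 + i];
            array[index2 + i] = tmp;
        }
    }

    public static void main(String[] args) {
        Integer[] a = {1, 2, 3, 4, 5, 6};
        swap(a, 0, 3, 3); // swap the first three elements with the last three
        System.out.println(Arrays.toString(a)); // [4, 5, 6, 1, 2, 3]
    }
}
```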

Even though it may seem complicated at first, the idea is pretty simple. If the offset is half the number of elements to be shifted, or in other words if offset == (n – offset), where n is the total number of elements to be shifted, then the shift is equivalent to swapping the two halves of the range.
In the first two cases we swap the portions at the ends; one of them lands in its final place and we continue iterating over the rest, as shown in the figure below.

[Figure: shift algorithm]
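To see what the end result of a shift should be, note that the semantics match java.util.Collections.rotate (which rotates in place by a different algorithm): shifting by a positive offset moves every element that many positions to the right, wrapping around the end.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RotateDemo {
    public static void main(String[] args) {
        // here n = 5 and offset = 2, so n_offset = 3
        // and the algorithm above would hit the offset < n_offset case
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        Collections.rotate(list, 2);
        System.out.println(list); // [4, 5, 1, 2, 3]
    }
}
```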

Space complexity is clearly O(1), but what about time complexity? I'm going to prove that it is O(n).

Let Sh(n, k) be the problem of shifting k positions in an array of size n and Sw(k) be the problem of swapping k elements in an array. For the sake of simplicity I left out the start and end indices.

It is obvious that O(Sw(k)) = O(k).

It is also obvious that O(Sh(1, k)) = O(1), with k < 1. Also O(Sh(x, 0)) = O(1).

Now let's assume that O(Sh(n, k)) = O(n) for any n, with k < n. I'll prove that O(Sh(n + 1, k')) = O(n), with k' < n + 1.

Analyzing the algorithm we have

O(Sh(n + 1, k')) = max(
    O(Sw(n + 1 – k')) + O(Sh(k', 2k' – n – 1)), if 2k' > n + 1
    O(Sw(k')) + O(Sh(n + 1 – k', k')), if 2k' < n + 1
    O(Sw(k')), if 2k' == n + 1
)

  • First case 2k’ > n + 1

    O(Sh(n + 1, k’)) = O(Sw(n + 1 – k’)) + O(Sh(k’, 2k’ – n – 1)) = O(n) + O(Sh(k’, 2k’ – n – 1))

    Because (k' < n + 1) ⇒ (k' ≤ n) and (2k' ≤ 2n) ⇒ (2k' – n – 1 ≤ n – 1) ⇒ (2k' – n – 1 < n), then O(Sh(n + 1, k')) = O(n) + O(n) = O(n).

  • Second case 2k’ < n + 1

    O(Sh(n + 1, k')) = O(Sw(k')) + O(Sh(n + 1 – k', k')) = O(k') + O(Sh(n + 1 – k', k')) = O(n) + O(Sh(n + 1 – k', k'))

    Because (0 < k') ⇒ (n + 1 – k' < n + 1) ⇒ (n + 1 – k' ≤ n) and (k' < (n + 1)/2) ⇒ (k' ≤ n/2) ⇒ (k' < n), then O(Sh(n + 1, k')) = O(n) + O(n) = O(n).

  • Third case 2k’ == n + 1

    O(Sh(n + 1, k’)) = O(Sw(k’)) = O(n)

So O(Sh(n + 1, k')) = O(n). As a consequence, O(Sh(n, k)) = O(n) for any n, with k < n.

The code will become part of commons-lang, ArrayUtils class, as of version 3.5.

Categories: Web

I had it with Maven

April 6, 2015 2 comments

Initially I was a big Maven fan. It was the best build tool around. When coding in C I was using make, and after switching to Java, Ant naturally came next.

But Maven was so much better. It had a few things you couldn't help but love. First of all, dependency management. Getting rid of downloading jars, of keeping them in a lib folder, and of the even bigger pain of updating them … wow … that alone was a breeze.

Maven also had a standard build lifecycle: you could download a project and start a build without knowing anything about it. You could have done this in Ant too, but projects would have had to follow a convention, which wasn't always the case. In all honesty, there wasn't even one :), at least not a written, formal one.

And then Maven came with a standard folder structure. If you pick up a new project, it's clearly easier to understand it and find what you're looking for.

And … that's about it. I would like to say that the fact that it uses XML was also a good thing. XML is powerful because it can be easily understood by both humans and computers (read: programs). But no tool other than Maven was interested in understanding its POM. And while XML is great for describing and formalizing something, like a workflow, using it for imperative tasks is not. Ant did this, and going through an Ant build wasn't the easiest task of all.

Maven was also known for its verbosity. Go to mvnrepository.com and take any package in there; among the snippets offered, Maven's is clearly the most verbose:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>

as compared to Gradle, for example:

'org.thymeleaf:thymeleaf:2.1.4.RELEASE'


And that’s for adding just one dependency. If you have 30 or more in your project, which for a web application is not uncommon, you end up with a looot of code …

And then comes customization. It is very good that Maven has a standardized lifecycle, but when it's not enough (and usually it isn't), it's very hard to customize. To do something really useful you need to write a plugin. I know there is the exec plugin, but it has some serious drawbacks, the most important being that it cannot do incremental builds. You can simply run a command, but you cannot run it only when some destination files are outdated compared to their corresponding sources.
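For comparison, in Gradle a task that declares its inputs and outputs gets incremental (up-to-date) checking for free. A sketch, with hypothetical paths and the compass command assumed to be on the PATH:

```groovy
task buildCss {
    // hypothetical locations; Gradle reruns the task only when
    // something under the inputs is newer than the outputs
    inputs.dir 'src/main/sass'
    outputs.dir 'build/css'
    doLast {
        exec {
            commandLine 'compass', 'compile', '--sass-dir', 'src/main/sass', '--css-dir', 'build/css'
        }
    }
}
```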

So, I needed something else. I looked a little bit over some existing build tools and none of them seemed very appealing, but I ended up switching to Gradle. Two reasons: Spring is using it (I’m a big fan of many Spring projects) and I wanted to also get acquainted with Groovy.

While switching a web application project, which took me 3 days, I ended up with a 10 KB build file instead of 36 KB, plus many added features. I was able to build my CSS and JS using Compass/Sass and browserify, and, more importantly, incrementally (though as a whole).

I was also able to better customize my generated Eclipse project, including specifying derived folders. As a side note, for improved Gradle support in Eclipse you need to install Gradle IDE from the update site http://dist.springsource.com/release/TOOLS/update/e4.4/ and the minimum required version is 3.6.3 – see here why. You may need to uncheck Contact all update sites during install to find required software if you get an installation error.

Gradle is probably not the dream build tool – it has a very loose syntax and it mixes descriptive properties with imperative tasks, reducing readability – but it's less verbose and much more flexible than Maven. Probably with some coding standards applied on top, it could become a good choice.

Categories: Software Tags: , ,

Smart laser tag

February 8, 2015 Leave a comment

Laser tag is still a niche market, but it has caught on a little in the last few years. There are a few things missing before it becomes mainstream.

Even though you can play it virtually anywhere, either in the office or in the woods, it is mostly played in arenas. And I think the problem is the equipment. It is bulky, hard to set up and most of the time needs a central computer to work at its full potential.

Of course, there are outdoor versions too. They are built around a microcontroller with uploaded software. Here the problem is limited computing capability and, as a consequence, limited features. Lately there have been Kickstarter projects where the guns are built using an Arduino-like board and sometimes even a touch screen. Extended capabilities and features.

Another issue is the sensors. A laser tag system is practically made of a gun and a set of sensors. The gun emits a ray through an infrared LED; if the ray is intercepted by a sensor, it is considered a hit. When the “life” points are completely drained, the gun works no more. Sensors are usually small and mounted on vests, headbands or guns. Why do I say this is an issue? First of all, if they're mounted on the gun, that's not too realistic, right? Also, the sensors are directly connected to a gun. Again not too realistic – you cannot carry two guns, you cannot pick up a killed enemy's gun. So it would be nice if you could disconnect the gun from the sensors.

And when it comes to guns you have a limited amount of models. As the systems are incompatible, building many gun models is not feasible.

Another thing is the price of such a system. The cheapest ones usually go over $100 apiece.

My idea is that a very realistic, feature-loaded and cheap laser tag system can be built. To address the issue of the microcontroller (and its limited computing capability) and the price, the idea of using a smartphone instead is obvious. Nowadays almost everyone has a smartphone, and this will greatly reduce the final cost of the system (we won't count the smartphone price, as you already have it).

Then we can easily address the gun-sensor connection. A gun will have a Bluetooth module, a (much smaller) microcontroller and an infrared LED. And a battery, a lens and two buttons: trigger and connect. The connect button pairs the gun with a smartphone. Practically, a gun will transmit through the LED whatever it receives from the smartphone through the Bluetooth receiver. There is still a microcontroller, but it is “thinner”, smaller and cheaper.

This way a gun can be handled by more than one player – not at the same time, of course. And the gun can be built as a module that can be mounted, like a scope, on any existing gun, such as an airsoft gun. So you also have a wide selection, including blow-back guns.

And the sensors, mounted on the vest, headband or both, will connect directly to the smartphone and will be the ones to control the “life” points. They can connect to the smartphone through Bluetooth or USB, though for the latter the smartphone must support USB OTG (which is becoming more and more frequent). And this would solve another issue: battery. I know that a smartphone usually has a shorter battery life, but external power banks are now cheap and common, and many people most probably already have one. For convenience you can even wear the smartphone on your favorite armband. So the sensor vest, maybe connected with a headband, will be just a network of infrared receivers, with different circuits if you want different weights of hits. This way the system allows head shots for the most skilful ones. The vest can be connected to only one smartphone, and if the “life” points are drained, the app will declare the player dead and will no longer transmit to the gun.

Now going back to the laser tag system as a smartphone app. What new features does this bring?

Practically there's no smartphone without GPS. Now imagine that you have a map on which you can see the position of your troops (and even their “life” points status), you can launch virtual airstrikes and you can even set up virtual proximity mines (with friendly fire or not). It does bring new dimensions to the game, doesn't it? And if the smartphone is actually a Google Glass-like device, augmented reality will open new opportunities. You can have this on a smartphone too, but, if it's not mounted somehow on the gun, it is harder to handle and actually view the augmented reality.

New games can easily be invented, like capture the hill or capture the flag (virtual flags). If you pass through a certain area, you capture a flag and a notification can be sent to everyone. Even games like escorting prisoners are easily within reach.

Customization now becomes the sweetest part of such a system. Besides the “life” points, you can have armour. Or doctors. Or prisoners. Or you can even place virtual drop-offs that can be picked up by players just by walking into the exact area. They can contain drugs (life points), armour, shells.

And to have it all, if you build such a system in a plug-in-able way, then anyone can develop new guns, games or any other feature and share them with the entire community. Imagine that you use PhoneGap and everyone can develop a plugin using JavaScript. A web laser combat game – just wow!

Now to talk a little bit about the price – the production price, I mean. The gun module has a Bluetooth chip, a microchip, an infrared LED, a lens, a battery enclosure and two buttons. The parts are all probably under $40. The sensor vest and headband have infrared receivers, a microchip and a USB connector. All together, under $40. So you have a full-featured laser combat system, easily extendable through software plug-ins, for under $100. That's without counting the smartphone, the power bank and the software. But if the latter were a community open-source effort … Maybe soon …

Categories: Ideas Tags: ,

WebRTC saga

February 5, 2015 Leave a comment

Recently I started using WebRTC. Cool technology. You can build a web chat in just tens of lines of code, you can take pictures with your webcam and using canvas you can manipulate them. And these are just a few examples of what WebRTC can do for you and your web app.

Tens of lines of code indeed. But not easy lines at all. Even though WebRTC will enable peer-to-peer communication, you still need some server components. These server components are involved in the session initiation.

First there is an ICE/STUN/TURN server, which is used by a client to discover its public IP address if it is located behind a NAT. Depending on your requirements, it may not be necessary to build/deploy your own server; you can use an already existing public (and free) one – here's a list. You can also deploy an open source one like Stuntman.

Then comes the signaling part, used by two clients to negotiate and start a WebRTC session. There is no standard here and you have a few options.

You can use an XMPP server with the Jingle extension. Developing your own XMPP server (or component) just for this would be overkill, so you should definitely consider using an existing one. But even installing, configuring and integrating an existing one could be too much if you don't use it for anything else. If you already have one in your infrastructure, though, you could hook into it. On the client side, you can use an existing XMPP JavaScript library.

You can also use SIP, a protocol much more common in VoIP. Like XMPP, SIP is too much to adopt just for WebRTC signaling; but if you already have it in your infrastructure, it could be easy to use. For SIP in Java you have two development options.

The high-level one is SIP servlets, with its most notable implementation, Mobicents. If you develop a non-GPL application, the Mobicents commercial license could be quite expensive just for WebRTC signaling. Mobicents runs on top of Tomcat or JBoss, so, depending on your environment, this could also be a drawback.

The lower-level option for SIP is JAIN-SIP, on which even Mobicents is built. It is completely free. There you have different transport options, like TCP, UDP or websockets; for WebRTC the latter is the most appropriate.

With both options, you’ll still have to develop the server logic and SIP is a pretty cumbersome protocol so prepare yourself for a few headaches.

If you decide on SIP, things could be a little brighter on the client side. There are already JavaScript SIP signaling libraries that you can easily integrate into your web applications. I looked into JsSIP and SIPJS and ended up using the latter. SIPJS is actually forked from JsSIP, but it encapsulates the intricacies of the protocol better, which makes it a little easier to integrate.

Another option is to develop your own signaling protocol using something like websockets. Then you have to develop both the client and the server side from scratch. Developing the server side, in my opinion, will be easier than with the previous options: you practically have to develop a messaging system that forwards messages between users without caring too much about the content. @ServerEndpoint abstracts the development in an elegant manner and you can even reuse your existing authentication system, as you can bind a websocket session to the HTTP one.
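A minimal forwarding endpoint along these lines might look as follows. This is only a sketch against the javax.websocket API (it needs a JSR 356 container to run); the path-based user registration and the first-line-is-the-target message format are my assumptions:

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/signal/{user}")
public class SignalingEndpoint {

    // user -> open websocket session
    private static final Map<String, Session> sessions = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session, @PathParam("user") String user) {
        sessions.put(user, session);
    }

    @OnMessage
    public void onMessage(String message, @PathParam("user") String from) throws IOException {
        // hypothetical convention: first line names the target user, the rest is the payload
        int nl = message.indexOf('\n');
        String to = message.substring(0, nl);
        Session peer = sessions.get(to);
        if (peer != null && peer.isOpen()) {
            // forward without inspecting the payload content
            peer.getBasicRemote().sendText(from + "\n" + message.substring(nl + 1));
        }
    }

    @OnClose
    public void onClose(@PathParam("user") String user) {
        sessions.remove(user);
    }
}
```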

If you're worried about websocket browser support, don't be. Wherever WebRTC is supported, WebSockets are too.

On the client side, things will be a little more complicated, as you will have to develop the entire message flow – the server just forwards your messages to the appropriate peer. This entire flow should be asynchronous, something like a request-response paradigm.

The development difficulty arises here mainly from the different behavior on various browsers. And by various, I mean Firefox and Chrome, the main ones supporting WebRTC.

When it comes to WebRTC you don't have to code anything for the actual streaming, only for initiating the session. Theoretically, the WebRTC session initiation process is as follows. Let's suppose A wants to talk to B, and, for the sake of simplicity, let's use “A signals B” with the meaning that A sends a message to B through the signaling channel.
1. A creates and initializes an RTCPeerConnection. A few things are involved here: a list of addresses of the ICE/TURN/STUN servers, and a local stream to share with B. The list of ICE servers is passed in the constructor; the local stream is obtained through navigator.getUserMedia() and added afterwards.
2. Using that connection, A creates an SDP (session description protocol) offer.
3. A signals B the offer.
4. When B receives the SDP offer from A (or even before), it creates an RTCPeerConnection like in step 1.
5. B sets the remote session description with RTCPeerConnection.setRemoteDescription(new RTCSessionDescription(SDPOfferFromA)), where SDPOfferFromA is the JSON object received through the signaling channel.
6. B creates an SDP answer.
7. B signals A the answer.
8. When A receives the answer, it sets the remote description like in step 5, using SDPAnswerFromB.
9. Asynchronously, whenever a new ICE candidate is discovered and surfaced through the onicecandidate event of RTCPeerConnection, A (or B) should signal it to the other party, which will add it using RTCPeerConnection.addIceCandidate(new RTCIceCandidate(candidateDescriptionReceived)).
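A's side of the flow above might be sketched like this (browser-only code, using the callback-style API of the time; sendToPeer and onAnswer stand in for whatever signaling channel you use and are hypothetical):

```javascript
var pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });

// forward each discovered ICE candidate through the signaling channel
pc.onicecandidate = function (event) {
    if (event.candidate) {
        sendToPeer({ type: 'candidate', candidate: event.candidate }); // sendToPeer is hypothetical
    }
};

navigator.getUserMedia({ video: true, audio: true }, function (stream) {
    pc.addStream(stream);                          // share the local stream
    pc.createOffer(function (offer) {              // create the SDP offer
        pc.setLocalDescription(offer);
        sendToPeer({ type: 'offer', sdp: offer }); // signal the offer to B
    }, console.error);
}, console.error);

// when B's answer arrives on the signaling channel
function onAnswer(sdpAnswerFromB) {
    pc.setRemoteDescription(new RTCSessionDescription(sdpAnswerFromB));
}
```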

Again, theoretically. In practice …

First of all, because WebRTC is not final, but in draft state, all the types are prefixed, like mozRTCPeerConnection (in Firefox) or webkitRTCPeerConnection (in Chrome and Opera). But this can be easily fixed.

window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;


Then comes the ICE candidates part. In Firefox, the ICE candidates must be gathered before creating an SDP offer or answer. So before creating an offer/answer, check RTCPeerConnection.iceGatheringState and proceed only when it is "complete".
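One way to handle that check is a small guard around offer creation; the polling approach here is my workaround sketch, not an official API:

```javascript
function createOfferWhenReady(pc, onOffer) {
    // wait for ICE gathering to finish before creating the offer (needed on Firefox)
    if (pc.iceGatheringState === 'complete') {
        pc.createOffer(onOffer, console.error);
    } else {
        setTimeout(function () { createOfferWhenReady(pc, onOffer); }, 100);
    }
}
```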

There are also a few inconsistencies when it comes to the media constraints passed to navigator.getUserMedia().

In the end, I can conclude that WebRTC is really cool: you can build media applications in a matter of tens or hundreds of lines of code. Still, I guess it could have been designed to be easier to use. Even though there are a lot of opinions stating that the signaling protocol is better left out of the standard, I still think it would be better if an optional one were included. Or at least a minimal, easy-to-implement API.

Categories: Web Tags: ,

Choosing an XMPP server

January 28, 2015 1 comment

I was working lately with WebRTC. One of the biggest issues there is the signaling part.

As an option, you can choose XMPP with its Jingle extension. So, naturally, I looked into a few XMPP servers. What requirements was I chasing?

My architecture is Java based, so I was looking for solutions built on this technology. I know that I can integrate different technologies through things like LDAP or web services, but … if I needed something custom, it would be much easier to develop it in Java. So, as a nice to have, the final solution should be extensible through some kind of plugin system. Another requirement was to be open source, or at least affordable.

Taking these into account I narrowed down the list to OpenFire, Tigase, Apache Vysper and Jerry Messenger.

Apache Vysper aims to be a modular, full-featured XMPP (Jabber) server. Unfortunately, it does not offer an easy installation and integration procedure out of the box. It also seems to be in a beta stage, not production ready.

Jerry Messenger has an embedded Jetty server, it can be easily configurable and it features a plugin-able system.

OpenFire seemed the most complete solution in its field. It has a very rich web admin interface, a plugin system and extensive documentation. As a bonus, it is just a web application so you can install it in your favorite web application server along with your other web applications.

Tigase claims to be the most scalable XMPP server supporting hundreds of thousands of concurrent users. It is configurable, standalone and plugin-able. Unfortunately the documentation is not as extensive.

Clearly, my preference went towards OpenFire and Tigase. But I ended up not using XMPP at all for WebRTC signaling. Why? All about it in the next article.

Categories: Web

Building enterprise web sites

August 26, 2013 Leave a comment

Responsive design is one of the latest and hottest topics in web design and development. Creating a responsive site – a site that offers a decent-to-good user experience and follows accessibility guidelines – is not easy, but it's not that hard after all. It could take a few iterations, but you'll get there. Doing it at an enterprise level, though, is a whole different game.
First of all, what does enterprise bring to the table that makes things harder? I'll clarify, just so we know where we stand and can clearly define the requirements.

To start, enterprise usually means big. So we have volume – in terms of pages, visitors, infrastructure, human resources. Let's see how each of them influences things.

Lots of pages. This is probably one of the most important aspects. If you develop each page individually, on its own, you'll waste money and time. And by the time you develop the last page, the first one has become obsolete and needs to be redone. Of course, I assumed that you want the same user experience (look and feel) across your entire website, which is the smart thing to do anyway, and I won't go into details why. So, coming back, there is clearly a need to reuse code in order to reduce development and maintenance effort and easily update all pages at once.

Lots of visitors. And this brings with it the need for performance. At every level: network and server infrastructure, code and process. Infrastructure is the foundation of your site and it’s also the main reason for stability (or instability) or, in other words, uptime (or downtime). If infrastructure is alright, then performance comes down to code. And on the web, this translates into server side execution time, number of requests, response size and client side execution time.

Big infrastructure. When talking about huge infrastructure, we will definitely see a heterogeneous environment: different server-side technologies of different ages. So we will have outdated environments and outdated code, and we will need to deploy to multiple environments simultaneously.

Lots of human resources. As many people will be involved in the development, there should be clear process and extensive documentation. Clear process will reduce the mistakes and inconsistency, while increasing development efficiency. Extensive documentation will decrease the learning curve and the need of direct support from developers. Also you should take into account that people with different backgrounds and skills will all work together. If we’re able to make development easier we will be able to reduce costs, either by reducing the need for high level skills or reducing the development time.

Now that we've seen what an enterprise site means and what the basic requirements are, I will go through a list of good practices. The list of requirements could actually be bigger than the one above, depending on your business needs. Also, the following best practices focus only on client-side development.

Architecture: Use a modular architecture

Modularize as much as possible. Besides a core framework, everything should be described and implemented as a component. Components will be assembled together to create pages, microsites or web applications. This leads to many advantages. Different teams can implement different components. The user experience can be changed entirely or partially, and, most importantly, it can be done globally: updating a component updates all the pages using it.
Besides the component developer, we identify the role of component user – the person who assembles a page, microsite or web application from the existing components.

Architecture: Use a governance process

Even though the component development process can easily be decentralized, ensuring consistency and avoiding development overlaps should be done centrally, through some kind of governance process. Representatives from all teams should be involved. By consistency, I mean using the same technologies and architectural patterns for the same purpose. Overlaps can mean implementing the same component twice, or components very similar in nature. E.g. if you implement an autosuggestion input component for cities while you already have a generic autosuggestion component, then you're overlapping. It's better to find a way to integrate a city suggestion service into the generic component.
Before starting to implement a component, check to see if you cannot use an existing one, even though it means some degree of customization.

Architecture: Reuse

As I said, one of the main reasons for governance is to reuse as much as possible, whether we're talking about your own code or third-party libraries. Do not reinvent the wheel; it will always be round and, unfortunately, maybe not from the first tries.

Architecture: Adopt the new

As a rule of thumb, always prefer to use the new version, either we’re talking about a standard or a JavaScript library. This way your site will take better the test of time. Of course, the problem is the browser support (read IE support), but there are usually workarounds.
Use the new HTML tags instead of the deprecated ones. STRONG is preferable to B, as it relates to semantics instead of layout.
Use CSS transitions instead of JavaScript animations. CSS transitions give better performance (as they can make use of the GPU), but JavaScript animations can be used as a fallback when the browser doesn't offer support.
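As a sketch of that fallback idea, the decision could look like the function below. The `.fade-in` class name is illustrative, and the `style` object is stubbed here; in the browser you would probe `document.documentElement.style`.

```javascript
// Feature-detect CSS transition support on a style object.
function supportsTransitions(style) {
  return "transition" in style || "webkitTransition" in style;
}

function fadeIn(el, style) {
  if (supportsTransitions(style)) {
    // Let the GPU-friendly CSS transition do the work:
    // .fade-in is assumed to define `transition: opacity .3s`.
    el.className += " fade-in";
    el.opacity = 1;
    return "css";
  }
  // JavaScript fallback: step the opacity manually
  // (a real fallback would do this on a timer).
  for (let o = 0; o <= 1; o += 0.1) {
    el.opacity = o;
  }
  return "js";
}
```

The return value just marks which path was taken, so the same call site works on old and new browsers alike.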

Architecture: Create extensive documentation

Document the development process, the patterns and the choice of technologies. Heavily comment your code. Document all the components and create some kind of easily browsable index. Consistency should be a concern when it comes to documentation too: use the same style and tools throughout all teams.
Extensive documentation will reduce the learning curve, especially for newcomers. It will also reduce the developers' involvement in deployment and in supporting component users.

Architecture: Encapsulate JavaScript/CSS into easy to use components

I mentioned earlier the different roles in your organization. It could be that the users of your components don't have a high degree of knowledge of JavaScript and/or CSS (please keep in mind that JavaScript is the most misunderstood language). E.g. YahooUI is a great library, but it requires JavaScript knowledge. The same goes for jQuery components. Instead of this approach, or actually on top of it, you should use a very simple principle: convention over code. Just to take the JavaScript part out of the equation, you could use CSS classes to annotate your component by convention. E.g. instead of calling $("TABLE.whatever").tablesorter() yourself, you can simply use the .sorted CSS class and include $("TABLE.sorted").tablesorter() in your main JavaScript as an onload handler. Of course, this will impact the initial load performance, but there are always trade-offs :).
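A minimal sketch of this convention-over-code idea, with DOM elements stubbed as plain objects so it runs anywhere. The class names and initializers are hypothetical; in the browser the same loop would use document.querySelectorAll and real widgets such as tablesorter.

```javascript
// Map of CSS class name -> component initializer (hypothetical components).
const initializers = {
  sorted: (el) => { el.sortable = true; },
  collapsible: (el) => { el.collapsed = false; },
};

// Run every initializer whose class name appears on an element.
function initComponents(elements) {
  for (const el of elements) {
    for (const cls of el.classes) {
      if (initializers[cls]) initializers[cls](el);
    }
  }
  return elements;
}

// Stand-in for the DOM at onload time:
const page = initComponents([
  { tag: "TABLE", classes: ["sorted"] },
  { tag: "DIV", classes: ["menu", "collapsible"] },
]);
```

The component user only writes markup with the agreed class names; all the JavaScript wiring lives once, in the main script.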

Performance and maintainability: Reduce the number of global resources

Reducing the number of global resources will reduce the number of requests and improve performance. But performance is not the only concern; maintainability matters too: it's easier to deploy one resource and to update one reference. By resources I mean stylesheets, JavaScript files, images, fonts etc.

The number of stylesheets and JavaScript files can easily be reduced to one. For CSS, preprocessors like Less or Sass will be of great help; for JavaScript, module systems like AMD. These tools help you modularize and organize your development environment while still deploying only one resource to production, and they make it easy to integrate even third-party resources.
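To illustrate the JavaScript side, here is a toy sketch of the AMD pattern. The tiny define/load pair stands in for a real loader such as RequireJS, and the module names are made up; the point is that modules authored in separate files keep their dependencies explicit, yet can be concatenated into a single production resource.

```javascript
// Minimal stand-in for an AMD loader.
const registry = {};

function define(name, deps, factory) {
  registry[name] = { deps, factory };
}

function load(name) {
  // Resolve dependencies recursively, then build the module.
  const { deps, factory } = registry[name];
  return factory(...deps.map(load));
}

// Two modules that would normally live in separate files,
// here concatenated into one script:
define("formatter", [], () => ({
  money: (n) => "$" + n.toFixed(2),
}));

define("cart", ["formatter"], (formatter) => ({
  total: (items) => formatter.money(items.reduce((sum, i) => sum + i.price, 0)),
}));

const cart = load("cart");
// cart.total([{ price: 2.5 }, { price: 7.5 }]) yields "$10.00"
```

A real loader also caches built modules and can fetch them asynchronously; this sketch skips both for brevity.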

When it comes to images, icons more specifically, icon fonts are the key. They are supported by many browsers, reduce all icon requests to one, and are scalable in size and color. The only disadvantage is that you cannot have multicolor icons. You can stack icons to obtain this, but it can become cumbersome.

Naming: Prefer short, but meaningful names

As for naming, try to use generic but meaningful names for everything, from resources to CSS classes. E.g. for resources, prefer "global.css" over "global_v3_blue.css", as you should stick with a name for a long time; this way, moving to a new version, you will still have a meaningful name. Also make use of server-side processing if you want to personalize the experience for different users, geographies, browsers etc. E.g. if you want two different CSS files, one for mobile devices and one for desktop browsers, refer to both as "global.css", but do a server-side redirect based on device detection.
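A hedged sketch of the server-side part of that idea; the file paths and the User-Agent pattern are purely illustrative.

```javascript
// Both device classes request the same "global.css" URL; the server
// maps it to the actual file based on the User-Agent string.
function resolveStylesheet(userAgent) {
  const isMobile = /Mobile|Android|iPhone|iPad/i.test(userAgent || "");
  return isMobile ? "css/mobile/global.css" : "css/desktop/global.css";
}
```

The HTML always says `<link rel="stylesheet" href="global.css">`; only this server-side mapping ever changes.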
As for the CSS classes, I already wrote about this subject.

Naming: Use the semantic of HTML tags

Use the semantics of HTML tags instead of defining new CSS classes, where appropriate. It is preferable to use H1 instead of .title and H2/H3 instead of .sectionTitle.

Naming: Use as few CSS classes as possible

This falls into the same reuse category. If you have hundreds of classes, it is harder to remember which class is for what and when to use one over another. Prefer contextual references in CSS selectors, or combining existing classes, rather than creating new ones. E.g. do not create a new class .right-menu; use the two classes .menu.right instead. Of course, an even better naming would be .menu.contextual.
So reuse applies even to CSS class names.

These are simple rules, but following them is not always easy. They will help you build an easily maintainable website, so you can react quickly to industry changes.

Categories: Web

Innovation Summit 2013 in Bucharest

May 30, 2013

Innovation Summit took place in Bucharest this week. In a few words: some interesting presentations, given in more or less attractive and engaging ways. Most of the things were not new, but it's nice to hear them again as confirmation and, most importantly, presented in a different way. Also, a lot of case studies were presented, which always lends confirmation to the speaker's ideas, and even to yours, if similar.

Petru Jucovschi, Technical Lead for Windows and Windows Phone – The next level of Digital Innovation. This was a presentation about innovation in user interfaces, with a strong focus on the so-called Metro interface for Windows 8. You can read more about it here or here. Embedding HTML5/CSS3/JS into the OS and as an application platform was presented as an innovative approach. It isn't: Palm had webOS years ago. Indeed, HP killed it, but it was on the market long before.

Meldrum Duncan, Innovation Consultant, Founder ?What IF! US – Oh, the mistakes I've made: 13 years of running innovation projects. Clearly an engaging speaker with an attractive presentation. He focused on 10 simple principles of driving innovation, among them ones that encourage collaboration outside the office, like "Drink beer!". Yes, yes: engage with colleagues, stakeholders and clients in a friendlier environment, which will open new thinking.

Windahl Finnigan, Associate Partner, UX Innovation & Strategy Director for Smarter Commerce at IBM – Full Tilt Boogie of Disruption was too corporate a speech for my taste. Even though it had some good and interesting examples, the presentation was less engaging, more like a flat line.

Matt Rosa, Innovations Director at BAT – An Iterative Consumer-Centric Approach to Product Innovation discussed innovation in the tobacco industry with a focus, of course, on BAT. Innovation in this field was close to zero up until 1999, but has covered multiple topics since then. From a process standpoint, he brought up the idea of going outside the organization to external agencies for a fresh, different view. Personally, I'm just glad that, with all the innovation, the base of tobacco customers shrinks each year.

Martin Hablesreiter, designer / artist / filmmaker at honey and bunny studio – Food Design is more important than politics. The innovative capacity of food design. I liked the presentation style, the content, the slides; but for me, food is just food, and what counts is its nutritional and health value. Martin discussed, in an ironic way, the many influences on food design: functionality, culture, transportation, economic feasibility etc. But imagine the visual impact of the photos on the slides right before lunch :).

Susan Choi, Director of Innovation at Mandalah – How the heart and pulse of culture can lead companies to culturally relevant and successful innovation? advocated local research before jumping to conclusions or reaching for well-known stereotypes. As a case study she used a campaign launched in the Middle East, where, surprisingly, they acknowledged an entire youth movement toward the digital space and innovative solutions.

Daniel Jurow from R/GA started his presentation with a bit of his company's history and how they expanded.
I clearly remember one of the slides: "93% of executives say their long-term success depends on their ability to innovate, but only 18% said that their innovation strategy works." The speaker said this happens because they are not taking bold steps, only smaller, safe ones. I would say this is the result of basing your innovation and related decisions on spreadsheets.

Instead of the old formula for growth, either through horizontal integration (line extension backed up by a serious mass-media campaign) or vertical integration, Daniel debated the concept of functional integration: the new idea of extracting value from your existing ecosystem. In plain English, reach your existing customers and sell them additional products and services; maybe some they don't even need, but they're so cool :). The examples are clear: Apple, Google, and at the summit I heard about BMW and Nike. Yes, Nike with Nike+, which allowed them to gain a huge market share. As a runner, I like minimalist shoes and trail running, and Nike doesn't have options there, so I don't use them. But it seems others really like to share their exercises on Facebook, so …

Of course, the only integration platform for end-user customers is the web.

In the end, the speaker recommended not restricting innovation to a single group, but encouraging a culture of innovation in the company, an idea reiterated by almost all the speakers after him.

Erez Tsalik
I said in the beginning that most of the things presented were not new. That's not the case here, at least not in approach. Erez contradicted almost everything the other speakers said and took a clearly analytical approach to the process of thinking about innovation. But he also reiterated that innovation needs commitment throughout the company, starting at the executive level. Managers need to know how innovation works and how to manage innovative people, because usually managers do not know how to distinguish between a good innovative idea and a bad one. That's the reason they should not be the ones validating innovative projects and ideas.

He discussed some of the classical approaches to the process of innovation (if there is one). Brainstorming seems not to get the best results, because people generate more ideas separately than together. Why? They don't waste time listening to others. But I would think that brainstorming is good as a next step, to validate the ideas and select the feasible ones.

Think out-of-the-box! Or better, NOT! Out-of-the-box means no rules, but in business, constraints exist. Constraints enhance innovation; it is said that necessity is the mother of all invention.

Another interesting topic was fixedness: symmetrical, structural, functional etc. Fixedness is normally good, but if you want innovation you have to make an effort and break it.

Altogether, an interesting and intriguing presentation with lots of good examples. The kind of presentation that makes you think: “But what if I try?”

Alexandru Cernatescu, Co-Founder, CEO & Head of Strategy at Infinit Solutions Agency – Romanian Reality Innovation Update – What stands behind a Romanian story of success based on innovation?. Unlike the other speakers, Alex didn't focus on the innovation process, case studies or ideas. He presented, in a very dynamic way, how innovation was the basis of his entrepreneurship and how, with will and abnegation, you can overcome obstacles and write yourself a success story.

As participants, we also had the opportunity to play with innovative products: transparent displays, the latest laptops that double as tablets, Windows phones. We even took part in a molecular food demo with Bernd Kirsch, Executive Chef of Radisson Blu, as an innovation in gastronomy. The opportunity to sample the end results was clearly a delight for all of us.

I would like to finish with a few recommendations that I liked and that were shared by most of the speakers. Innovation is not limited to a small group in the company; it should be encouraged as part of the internal culture. Managers (especially executives) should not pose as innovation judges, as this is most probably not their main focus and strength. The innovation process, if any, is ever-changing and should be part of the innovation itself.

Categories: Web