IMW 2015 or How to make an (un)interesting presentation

October 8, 2015 Leave a comment

I’ve just attended, for the fourth year in a row (I guess), the biggest Romanian conference on Internet and mobile. Bigger and bigger each year, Internet and Mobile World is something I’ve always had a love-hate relationship with. At first I say it’s not interesting and it’s probably the last year I’ll attend, but then a few presentations and exhibitors change my mind. This year was no different.

I recently read a Dale Carnegie book, and he said that if you want to sell something you shouldn’t talk about yourself and your product, but about the customer’s needs. Nothing but the truth. With this in mind, I saw some really boring presentations. First of all, some of them were given by managers or CEOs. I don’t have anything against CEOs, but let’s get one thing straight. They got there not because they know how to captivate an audience, but because they know how to run a business. Unfortunately, some people, as soon as they become managers, seem to instantly gain access to the entire world’s knowledge and become proficient in every skill known to mankind. No! Again, they became managers because they recognized talent and knew how to bring it into their business. They should do the same here: get someone else to give a really interesting and captivating presentation.

I understand that you want to brag about what cool things you are creating and how well your company is doing. But me, in the audience, I’m also doing cool things. And if I’m not, I probably hate you for it. Or I simply don’t care. You don’t care about my needs or interests either, so why shouldn’t I return the favor?

I can easily get a financial report, a projection or a product/service portfolio from any company website, so don’t come and present long, boring slides on any of these topics. But if you talk to me about my needs and interests and then slowly introduce how a new technology can help me, you can arouse my curiosity. And now that you have me, you can happen to mention your cool and innovative product that embodies that technology. And don’t make any false claims, either about the product or about your expertise in the field. If I realize it, and I surely will if I’m interested in the subject, given the many channels for gathering information nowadays, then I will drop your presentation like a rotten apple. At the very best, a shadow of doubt will fall over the whole of it.

High-level managers of large corporations (the ones that haven’t evolved from startups) usually tend to give results-oriented presentations. Whoa! I did not pay for a ticket to come and do your business for you. I get it: your company is the greatest, and I think we established that already. Developers, on the other hand, tend to lose themselves in small technical details; I even saw slides of code in front of a general audience. Whoa again! I did not come here to work. I came to get fresh ideas, make new contacts and see where the market (not a company or product) sits as a whole. So you’d better arouse my interest: show me something cool and interesting, spiced with clear use cases. Walk in my shoes and show me a road to your product, but make it an interesting road.

Few presentations were like this, but those that were fell into the interesting area, even though I already knew some of their content. And for God’s sake, speak loudly and by no means in a monotone voice. And if you don’t master English, better ask your audience if you can give your presentation in your mother tongue.

I also need to highlight an interesting idea materialized in a cool product by a Romanian company. Altom built a robot that, using two motors and a camera, was able to automate testing on real(!) devices like cameras and tablets. Using the camera it recognized images, and then it could move on the XY axes and tap on the device. And you could write your testing scenarios directly from your IDE, just like a Selenium test case! I would vote it the most innovative product at the fair.

In general, the market trends seemed to be streaming and video on mobile, and the Internet of Things. Robotics seemed to be catching on, but that has been the case with this field for years and years: it has spikes but never really becomes mainstream. Indeed, in different forms and shapes robots have already entered our lives, but not to the extent that SF fans always hoped. The cloud has moved to a mainstream level, which is about right. I would definitely not think today of creating my own infrastructure, no matter the size of the project or company.

So, all in all, IMW 2015 wasn’t a waste of time, with some (though not a majority of) interesting presentations and exhibitors. Will I go next year? We’ll see :).

Categories: Technology

Shift an array in O(n) in place

April 7, 2015 Leave a comment

Below I copied the code for shifting an array in place.

void shift(Object[] array, int startIndexInclusive, int endIndexExclusive, int offset) {
    if (array == null) {
        return;
    }
    if (startIndexInclusive >= array.length - 1 || endIndexExclusive <= 0) {
        return;
    }
    if (startIndexInclusive < 0) {
        startIndexInclusive = 0;
    }
    if (endIndexExclusive >= array.length) {
        endIndexExclusive = array.length;
    }
    int n = endIndexExclusive - startIndexInclusive;
    if (n <= 1) {
        return;
    }
    // normalize the offset into [0, n)
    offset %= n;
    if (offset < 0) {
        offset += n;
    }
    while (n > 1 && offset > 0) {
        int n_offset = n - offset;

        if (offset > n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n - n_offset, n_offset);
            n = offset;
            offset -= n_offset;
        } else if (offset < n_offset) {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            startIndexInclusive += offset;
            n = n_offset;
        } else {
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            break;
        }
    }
}

The swap(array, index1, index2, len) method swaps in the given array the elements from [index1, index1 + len) with the ones [index2, index2 + len).
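A minimal sketch of that swap helper (the wrapper class name is mine, just to make the snippet self-contained):

```java
// Sketch of the assumed swap helper: exchanges the block
// [index1, index1 + len) with the block [index2, index2 + len).
class ArraySwap {
    static void swap(Object[] array, int index1, int index2, int len) {
        for (int i = 0; i < len; i++) {
            Object tmp = array[index1 + i];
            array[index1 + i] = array[index2 + i];
            array[index2 + i] = tmp;
        }
    }
}
```

A plain element-by-element exchange is enough here, because the shift algorithm never asks it to swap overlapping blocks.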

Even though it may seem complicated at first, the idea is pretty simple. If the offset is half of the shifted range, or in other words if offset == (n – offset), where n is the total number of elements to be shifted, then the shift is equivalent to swapping the two halves of that range.
For the first two cases we swap the portions at the ends; one of them ends up in its final place, and we continue the iteration for the rest, as shown in the figure below.

shift algorithm

Space complexity is clearly O(1), but what about time complexity? I’m going to prove that it is O(n).

Let Sh(n, k) be the problem of shifting k positions in an array of size n and Sw(k) be the problem of swapping k elements in an array. For the sake of simplicity I left out the start and end indices.

It is obvious that O(Sw(k)) = O(k).

It is also obvious that O(Sh(1, k)) = O(1), with k < 1. Also O(Sh(x, 0)) = O(1).

Now let's assume that O(Sh(m, k)) = O(m) for any m ≤ n, with k < m. I'll try to prove that O(Sh(n + 1, k')) = O(n), with k' < n + 1.

Analyzing the algorithm we have

O(Sh(n + 1, k')) = max(
    O(Sw(n + 1 - k')) + O(Sh(k', 2k' - n - 1)), if 2k' > n + 1
    O(Sw(k')) + O(Sh(n + 1 - k', k')), if 2k' < n + 1
    O(Sw(k')), if 2k' == n + 1
)

  • First case 2k’ > n + 1

    O(Sh(n + 1, k’)) = O(Sw(n + 1 – k’)) + O(Sh(k’, 2k’ – n – 1)) = O(n) + O(Sh(k’, 2k’ – n – 1))

    Because (k' < n + 1) ⇒ (k' ≤ n) and (2k' ≤ 2n) ⇒ (2k' – n – 1 ≤ n – 1) ⇒ (2k' – n – 1 < n), then O(Sh(n + 1, k')) = O(n) + O(n) = O(n).

  • Second case 2k’ < n + 1

    O(Sh(n + 1, k')) = O(Sw(k')) + O(Sh(n + 1 – k', k')) = O(k') + O(Sh(n + 1 – k', k')) = O(n) + O(Sh(n + 1 – k', k'))

    Because (0 < k') ⇒ (n + 1 – k' < n + 1) ⇒ (n + 1 – k' ≤ n) and (k' < (n + 1)/2) ⇒ (k' ≤ n/2) ⇒ (k' < n), then O(Sh(n + 1, k')) = O(n) + O(n) = O(n).

  • Third case 2k’ == n + 1

    O(Sh(n + 1, k’)) = O(Sw(k’)) = O(n)

So O(Sh(n + 1, k')) = O(n). As a consequence, O(Sh(n, k)) = O(n) for any n, with k < n.

The code will become part of commons-lang, ArrayUtils class, as of version 3.5.

Categories: Web

I had it with Maven

April 6, 2015 3 comments

Initially I was a big Maven fan. It was the best build tool. When coding in C I used make, and after switching to Java, Ant naturally came next.

But Maven was so much better. It had a few things that you cannot help but love. First of all, dependency management. Getting rid of downloading jars, putting them in a lib folder and, the even bigger pain, updating them … wow … getting rid of all of that was a breeze.

Maven also had a standard build lifecycle, so you can download a project and start a build without knowing anything about it. You could have done this in Ant, but there projects would have had to follow a convention, which wasn’t always the case. In all honesty, there wasn’t even a formal written one :).

And then Maven came with a standard folder structure. If you pick up a new project, it’s clearly easier to understand it and find what you’re looking for.

And … that’s about it. I would like to say that the fact that it uses XML was also a good thing. XML is powerful because it can easily be understood by both humans and computers (read: programs). But no tool other than Maven was interested in understanding its POM. And while XML is great for describing and formalizing something, like a workflow, using it for imperative tasks is not. Ant did this, and going through an Ant build wasn’t the easiest task of all.

Maven was also known for its verbosity. If you go to mvnrepository.com and pick any package there, you’ll clearly see that Maven’s snippet is the most verbose:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>

as compared to Gradle, for example:

'org.thymeleaf:thymeleaf:2.1.4.RELEASE'

And that’s just for adding one dependency. If you have 30 or more in your project, which is not uncommon for a web application, you end up with a looot of code …

And then there’s customization. It is very good that Maven comes with a standardized lifecycle, but if it’s not enough (and usually it isn’t) it’s very hard to customize. To do something really useful you’ll need to write a plugin. I know there is the exec plugin, but it has some serious drawbacks, the most important being that it cannot do incremental builds. You can simply run a command, but you cannot run it only when some destination files are outdated compared to their corresponding sources.
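For contrast, this incremental behavior is exactly what Gradle offers out of the box through task inputs and outputs. A hypothetical build-script fragment (task name, paths and command are made up for illustration): Gradle skips the task entirely when nothing under the declared input directory changed since the last run.

```groovy
// Hypothetical: recompile Sass only when something under src/main/sass changed.
task compileSass(type: Exec) {
    inputs.dir 'src/main/sass'
    outputs.dir 'build/css'
    commandLine 'compass', 'compile'
}
```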

So, I needed something else. I looked a bit at some existing build tools and, while none of them seemed very appealing, I ended up switching to Gradle. Two reasons: Spring is using it (and I’m a big fan of many Spring projects) and I also wanted to get acquainted with Groovy.

While switching over a web application project, which took me 3 days, I ended up with a 10 KB build file instead of a 36 KB one, and with many added features. I was able to build my CSS and JS using Compass/Sass and browserify and, more importantly, incrementally (though as a whole).

I was also able to better customize my generated Eclipse project, including specifying derived folders. As a side note, for improved Gradle support in Eclipse you need to install Gradle IDE from the update site http://dist.springsource.com/release/TOOLS/update/e4.4/ and the minimum required version is 3.6.3 – see here why. You may need to uncheck “Contact all update sites during install to find required software” if you get an installation error.

Gradle is probably not the dream build tool: it has a very loose syntax and it mixes descriptive properties with imperative tasks, reducing readability. But it’s less verbose and much more flexible than Maven. With some coding standards applied on top, it could become a good choice.

Categories: Software Tags: , ,

Smart laser tag

February 8, 2015 Leave a comment

Laser tag is still a niche market, but it has caught on a little in the last few years. There are a few things to sort out before it becomes mainstream.

Even though you can play it virtually anywhere, in the office or in the woods, it is mostly played in arenas. And I think the problem is the equipment: it is bulky, hard to set up and most of the time needs a central computer to work at its full potential.

Of course, there are outdoor versions too. They are built around a microcontroller with uploaded software. The problem here is limited computing capabilities and, as a consequence, limited features. Lately, on Kickstarter, there are projects where the guns are built using an Arduino-like board and sometimes even a touch screen. Extended capabilities and features.

Another issue is the sensors. A laser tag system is practically made of a gun and a set of sensors. The gun, using an infrared LED, emits a ray which, if intercepted by a sensor, counts as a hit. When the “life” points are completely drained, the gun no longer works. Sensors are usually small and mounted on vests, headbands or the guns themselves. Why do I say this is an issue? First of all, if they’re mounted on the gun, that’s not too realistic, right? The sensors are also directly connected to a gun. Again, not too realistic: you cannot have two guns, and you cannot pick up a killed enemy’s gun. So it would be nice if you could disconnect the gun from the sensors.

And when it comes to guns, you have a limited number of models. As the systems are incompatible, building many gun models is not feasible.

Another thing is the price of such a system: the cheapest usually goes over $100 a piece.

My idea is that a very realistic, feature-loaded and cheap laser tag system can be built. To address the issues of the microcontroller (and its limited computing capability) and the price, the idea of using a smartphone instead is obvious. Nowadays almost everyone has a smartphone, and this greatly reduces the final cost of the system (we don’t count the smartphone price, as you already have it).

Then we can easily address the gun-sensor connection. A gun will have a Bluetooth module, a (much smaller) microcontroller and an infrared LED. Plus a battery, a lens and two buttons: trigger and connect. The connect button pairs the gun with a smartphone. Practically, a gun will transmit through the LED whatever it receives from the smartphone through the Bluetooth receiver. There is still a microcontroller, but it is “thinner”, smaller and cheaper.

This way a gun can be handled by more than one player (not at the same time, of course). And the gun can be built as a module that can be mounted, like a scope, on any existing gun, such as an airsoft gun. So you also have a wide selection, even blow-back guns.

And the sensors, mounted on the vest, headband or both, will connect directly to the smartphone, which will be the one to control the “life” points. They can connect to the smartphone through Bluetooth or USB, but in the latter case the smartphone must support USB OTG (which is becoming more and more common). And this solves another issue: battery. I know the smartphone usually has a shorter battery life, but external power banks are now cheap and common, and many people most probably already have one. For convenience you can even wear the smartphone on your favorite armband. So the sensor vest, maybe connected with a headband, will be just a network of infrared receivers, with different circuits if you want hits to carry different weights. This way the system allows head shots for the most skilful players. The vest can be connected to only one smartphone, and when the “life” points are drained the app declares the player dead and stops transmitting to the gun.

Now going back to the laser tag system as a smartphone app. What new features does this bring?

Practically there’s no smartphone without GPS. Now imagine you have a map on which you can see the position of your troops (and even their “life” points status), launch virtual airstrikes and even set up virtual proximity mines (with friendly fire or not). It brings new dimensions to the game, doesn’t it? And if the smartphone is actually a Google Glass-like device, augmented reality opens new opportunities. You can have this on a smartphone too, but, if it’s not somehow mounted on the gun, it is harder to handle while actually viewing the augmented reality.

New games can easily be invented, like capture the hill or capture the flag (with virtual flags). If you pass through a certain area, you capture a flag and a notification is sent to everyone. Even games like escorting prisoners are within easy reach.

Customization now becomes the sweetest part of such a system. Besides the “life” points, you can have armour. Or doctors. Or prisoners. Or you can even do virtual drop-offs that can be picked up by players just by entering the exact area. They can contain drugs (life points), armour or shells.

And to top it all, if you build such a system in a plugin-able way, anyone can develop new guns, games or any other feature and share them with the entire community. Imagine that you use PhoneGap and everyone can develop a plugin using JavaScript. A web laser combat game: just wow!

Now to talk a little about the price (production price, I mean). The gun module has a Bluetooth chip, a microchip, an infrared LED, a lens, a battery enclosure and two buttons; the parts are all probably under $40. The sensor vest and headband have infrared receivers, a microchip and a USB connector; all together under $40. So you have a full-featured laser combat system, easily extendable through software plugins, for under $100. That’s without counting the smartphone, power bank and software. But if the latter were a community open-source effort … Maybe soon …

Categories: Ideas Tags: ,

WebRTC saga

February 5, 2015 Leave a comment

Recently I started using WebRTC. Cool technology. You can build a web chat in just tens of lines of code, you can take pictures with your webcam and using canvas you can manipulate them. And these are just a few examples of what WebRTC can do for you and your web app.

Tens of lines of code indeed, but not easy lines at all. Even though WebRTC enables peer-to-peer communication, you still need some server components, which are involved in session initiation.

First, there is an ICE/STUN/TURN server, used by a client to discover its public IP address when it is behind a NAT. Depending on your requirements, it may not be necessary to build and deploy your own server; you can use an existing public (and free) one (here‘s a list). You can also deploy an open-source one like Stuntman.

Then comes the signaling part, used by two clients to negotiate and start a WebRTC session. There is no standard here, and you have a few options.

You can use an XMPP server with a Jingle extension. Developing your own XMPP server (or component) just for this would be overkill, so you should definitely consider using an existing one. But even installing, configuring and integrating an existing one could be too much if you don’t use it for anything else. If you already have one in your infrastructure, though, you could hook into it. On the client side, you can use an existing XMPP JavaScript library.

You can also use SIP, a protocol much more commonly encountered in VoIP. Like XMPP, SIP is too much to use just for WebRTC signaling, but if you already have it in your infrastructure, it could be easy to reuse. For SIP in Java you have two development options.

The high-level one is SIP servlets, with its most notable implementation, Mobicents. If you develop a non-GPL application, the Mobicents commercial license could be quite expensive just for WebRTC signaling. Mobicents is developed on top of Tomcat or JBoss, so, depending on your environment, this could also be a drawback.

The lower-level option for SIP is JAIN-SIP, on top of which even Mobicents is developed. It is completely free. You have different transport options there, like TCP, UDP or WebSockets; for WebRTC, the latter is the most appropriate.

With both options you’ll still have to develop the server logic, and SIP is a pretty cumbersome protocol, so prepare yourself for a few headaches.

If you decide on SIP, things could be a little brighter on the client side. There are already JavaScript SIP signaling solutions that you can easily integrate into your web application. I looked into JsSIP and SIPJS and ended up using the latter. SIPJS is actually forked from JsSIP, but it encapsulates the intricacies of the protocol better, which makes it a little easier to integrate.

Another option is to develop your own signaling protocol on top of something like WebSockets. Then you have to develop both the client and the server side from scratch. Developing the server side, in my opinion, is easier than the previous options: you practically have to develop a messaging system that forwards messages between users without caring too much about their content. @ServerEndpoint abstracts development in an elegant manner, and you can even reuse your existing authentication system, as you can bind a WebSocket session to the HTTP one.
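To make the idea concrete, here is a container-independent sketch of such a relay’s core logic. The `userId|payload` wire format and all names are my own assumptions; in a real @ServerEndpoint class, the registered Consumer would be the WebSocket session’s sendText.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Sketch of a signaling relay: it forwards each message to the addressed
// peer without inspecting the payload. Wire format (assumed): "userId|payload".
class SignalingRelay {
    private final Map<String, Consumer<String>> peers = new ConcurrentHashMap<>();

    // In a real endpoint, called from @OnOpen with session::sendText-like sender.
    void register(String userId, Consumer<String> send) {
        peers.put(userId, send);
    }

    // In a real endpoint, called from @OnClose.
    void unregister(String userId) {
        peers.remove(userId);
    }

    // In a real endpoint, called from @OnMessage.
    void onMessage(String message) {
        int sep = message.indexOf('|');
        Consumer<String> peer = peers.get(message.substring(0, sep));
        if (peer != null) {
            peer.accept(message.substring(sep + 1));
        }
    }
}
```

The point is that the server stays a dumb forwarder; all the SDP/ICE semantics live in the clients.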

If you’re worried about WebSocket browser support, don’t be. Wherever WebRTC is supported, WebSockets are too.

On the client side, things are a little more complicated, as you will have to develop the entire message flow; the server just forwards your messages to the appropriate peer. This entire flow should be asynchronous, something like a request-response paradigm.
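That request-response correlation over a fire-and-forget channel can be sketched like this (shown in Java for brevity; the id-prefix scheme and all names are my assumptions, not something the protocol prescribes):

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Correlates outgoing requests with incoming responses over a channel that
// only supports "send a message" / "receive a message". Assumed wire
// format: "id:body".
class RequestResponseChannel {
    private final Consumer<String> transport;  // e.g. the WebSocket send function
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();
    private final AtomicLong ids = new AtomicLong();

    RequestResponseChannel(Consumer<String> transport) {
        this.transport = transport;
    }

    // Send a request; the future completes when the matching response arrives.
    CompletableFuture<String> request(String body) {
        long id = ids.incrementAndGet();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(id, reply);
        transport.accept(id + ":" + body);
        return reply;
    }

    // To be called for every incoming message of the form "id:body".
    void onMessage(String message) {
        int sep = message.indexOf(':');
        long id = Long.parseLong(message.substring(0, sep));
        CompletableFuture<String> reply = pending.remove(id);
        if (reply != null) {
            reply.complete(message.substring(sep + 1));
        }
    }
}
```

The same pending-map idea translates directly to JavaScript with Promises on the browser side.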

The development difficulty arises here mainly from the different behavior on various browsers. And by various, I mean Firefox and Chrome, the main ones supporting WebRTC.

When it comes to WebRTC you don’t have to code anything for the actual streaming, only for initiating the session. Theoretically, the WebRTC session initiation process is as follows. Let’s suppose A wants to talk to B and, for the sake of simplicity, let’s use “A signals B” with the meaning that A sends a message to B through the signaling channel.
1. A creates and initializes an RTCPeerConnection. A few things are involved here: a list of addresses of the ICE/TURN/STUN servers and a local stream to share with B.
The list of ICE servers is passed in the constructor; the local stream is obtained through navigator.getUserMedia() and added afterwards.
2. Using that connection, A creates an SDP (Session Description Protocol) offer.
3. A signals B the offer.
4. When B receives the SDP offer from A (or even before), it creates an RTCPeerConnection like in step 1.
5. B sets the remote session description: RTCPeerConnection.setRemoteDescription(new RTCSessionDescription(SDPOfferFromA)), where SDPOfferFromA is the JSON object received through the signaling channel.
6. B creates an SDP answer.
7. B signals A the answer.
8. When A receives the answer, it sets the remote description like in step 5, using SDPAnswerFromB.
9. Asynchronously, whenever a new ICE candidate is discovered and presented through the onicecandidate event of RTCPeerConnection, A (or B) should signal it to the other party, which then adds it using RTCPeerConnection.addIceCandidate(new RTCIceCandidate(candidateDescriptionReceived)).

Again, theoretically. In practice …

First of all, because WebRTC is not final but still in draft state, all the types are prefixed, like mozRTCPeerConnection (in Firefox) or webkitRTCPeerConnection (in Chrome and Opera). But this can be easily fixed:

window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;


Then come the ICE candidates. In Firefox, the ICE candidates must be gathered before creating an SDP offer or answer. So before creating an offer/answer, check RTCPeerConnection.iceGatheringState and proceed only if it is "complete".

There are also a few inconsistencies in the media constraints passed to navigator.getUserMedia().

In the end, I can conclude that WebRTC is really cool and that you can build media applications in a matter of tens or hundreds of lines of code. Still, I guess it could have been designed to be easier to use. Even though there are a lot of opinions stating that the signaling protocol is better left out of the standard, I still think it would be better if an optional one were included, or at least a minimal, easy-to-implement API.

Categories: Web Tags: ,

Choosing an XMPP server

January 28, 2015 1 comment

I have been working with WebRTC lately. One of the biggest issues there is the signaling part.

As an option, you can choose XMPP with its Jingle extension. So naturally I looked into a few XMPP servers. What requirements was I chasing?

My architecture is Java-based, so I was looking for solutions built on this technology. I know I can integrate different technologies through things like LDAP or web services, but … if I needed something custom, it would be much easier to develop it in Java. So, as a nice-to-have, the final solution should be extensible through some kind of plugin system. Another requirement was for it to be open source, or at least affordable.

Taking these into account I narrowed down the list to OpenFire, Tigase, Apache Vysper and Jerry Messenger.

Apache Vysper aims to be a modular, full-featured XMPP (Jabber) server. Unfortunately, it does not offer an easy out-of-the-box installation and integration procedure. It also seems to be in a beta stage, not production-ready.

Jerry Messenger has an embedded Jetty server, it is easily configurable and it features a plugin-able system.

OpenFire seemed the most complete solution in its field. It has a very rich web admin interface, a plugin system and extensive documentation. As a bonus, it is just a web application so you can install it in your favorite web application server along with your other web applications.

Tigase claims to be the most scalable XMPP server, supporting hundreds of thousands of concurrent users. It is configurable, standalone and plugin-able. Unfortunately, the documentation is not as extensive.

Clearly, my preference went towards OpenFire and Tigase. But I ended up not using XMPP at all for WebRTC signaling. Why? All about it in the next article.

Categories: Web

Building enterprise web sites

August 26, 2013 Leave a comment

Responsive design is one of the latest and hottest topics in web design and development. Creating a responsive site, a site that offers a decent-to-good user experience and follows accessibility guidelines, is not easy, but it’s not that hard after all. It could take a few iterations, but you’ll get there. Doing it at an enterprise level, though, is a whole different game.
First of all, what does enterprise bring to the table that makes things harder? I’ll clarify, just so we know where we stand and can clearly define the requirements.

To start, enterprise usually means big. So we have volume: in terms of pages, visitors, infrastructure and human resources. Let's see how each of them influences the problem.

Lots of pages. This is probably one of the most important aspects. If you develop each page individually, on its own, you'll waste money and time. And by the time you develop the last page, the first one becomes obsolete and needs to be redone. Of course, I assumed that you want the same user experience (look and feel) across your entire website, which is the smart thing to do anyway, and I won't go into details here as to why. So, coming back, there is clearly a need to reuse code in order to reduce development and maintenance effort and to easily update all pages at once.

Lots of visitors. And this brings with it the need for performance. At every level: network and server infrastructure, code and process. Infrastructure is the foundation of your site and it's also the main reason for stability (or instability), in other words uptime (or downtime). If the infrastructure is alright, then performance comes down to code. And on the web, this translates into server-side execution time, number of requests, response size and client-side execution time.

Big infrastructure. A huge infrastructure almost always means a heterogeneous environment: different server-side technologies of different ages. So we will have outdated environments and outdated code, and we will need to deploy to multiple environments simultaneously.

Lots of human resources. As many people will be involved in the development, there should be a clear process and extensive documentation. A clear process will reduce mistakes and inconsistency while increasing development efficiency. Extensive documentation will flatten the learning curve and reduce the need for direct support from developers. Also take into account that people with different backgrounds and skills will all work together. If we can make development easier, we can reduce costs, either by reducing the need for high-level skills or by reducing development time.

Now that we’ve seen what an enterprise site means and what are the basic requirements, I will just go through a list of good practices. The list of requirements could actually be bigger than the ones above, depending on your business needs. Also the following best practices are focusing only on client side development.

Architecture: Use a modular architecture

Modularize as much as possible. Besides a core framework, everything should be described and implemented as a component. Components will be assembled together to create pages, microsites or web applications. This leads to many advantages. Different teams can implement different components. The user experience can be changed in its entirety or partially, but the most important aspect is that it can be done globally: updating a component updates all the pages using it.
Besides the component developer, we can identify the role of component user, the person who assembles a page, microsite or web application from the existing components.
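To make the developer/user split concrete, here is a minimal sketch of a component registry (all names are hypothetical, not from any particular framework): each team registers a component factory under a name, and component users only need to know that name.

```javascript
// Minimal sketch of a component registry (names are hypothetical).
// Each team registers a component factory under a name; pages are
// assembled by instantiating factories on the elements that use them.
var ComponentRegistry = (function () {
  var factories = {};
  return {
    register: function (name, factory) {
      factories[name] = factory;
    },
    // Instantiate a component by name on a given element. In the
    // browser this would typically be driven by a DOM scan, e.g.
    // over elements carrying a data-component attribute.
    create: function (name, element) {
      if (!factories[name]) {
        throw new Error("Unknown component: " + name);
      }
      return factories[name](element);
    }
  };
})();

// One team ships the component; the component user only needs its name:
ComponentRegistry.register("sortedTable", function (element) {
  element.sorted = true; // stand-in for real initialization
  return element;
});
```

Updating the `sortedTable` factory in one place updates every page that instantiates it, which is exactly the global-update property described above.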

Architecture: Use a governance process

Even though the component development process can easily be decentralized, ensuring consistency and avoiding development overlaps should be done centrally through some kind of governance process. Representatives from all teams should be involved. By consistency, I mean using the same technologies and architectural patterns for the same purpose. Overlaps mean implementing the same component twice, or implementing components very similar in nature. E.g. if you implement an autosuggestion input component for cities while you already have a generic autosuggestion component, you're overlapping. It's better to find a way to integrate a city suggestion service into the generic component.
Before starting to implement a component, check whether you can use an existing one, even if it means some degree of customization.

Architecture: Reuse

As I said, one of the main reasons for governance is to reuse as much as possible, whether we're talking about your own code or third-party libraries. Do not reinvent the wheel: it will always come out round and, unfortunately, maybe not on the first try.

Architecture: Adopt the new

As a rule of thumb, always prefer the newer version, whether we're talking about a standard or a JavaScript library. This way your site will better stand the test of time. Of course, the problem is browser support (read: IE support), but there are usually workarounds.
Use the new HTML tags instead of the deprecated ones. STRONG is preferable to B, as it relates to semantics instead of layout.
Use CSS transitions instead of JavaScript animations. CSS transitions give better performance (as they can make use of the GPU), while JavaScript animations can be used as a fallback when the browser doesn't offer support.
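The CSS-transition-with-JavaScript-fallback idea can be sketched with simple feature detection (the `.fade-in` class name is hypothetical; the real JavaScript fallback would be an animation loop rather than the one-line stand-in here):

```javascript
// Detect CSS transition support by probing an element's style object;
// vendor prefixes cover the older browsers the fallback targets.
function supportsCssTransitions(style) {
  var props = ["transition", "WebkitTransition", "MozTransition", "OTransition"];
  for (var i = 0; i < props.length; i++) {
    if (props[i] in style) { return true; }
  }
  return false;
}

function fadeIn(element) {
  if (supportsCssTransitions(element.style)) {
    // Let the GPU-friendly CSS transition do the work; .fade-in
    // is assumed to define the transition in the stylesheet.
    element.className += " fade-in";
  } else {
    // Fallback: a JavaScript animation loop would go here; setting
    // the final opacity directly is a simplified stand-in.
    element.style.opacity = 1;
  }
}
```

Modern browsers take the cheap CSS path; only the old ones pay for the JavaScript fallback.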

Architecture: Create extensive documentation

Document the development process, patterns and choice of technologies. Comment your code thoroughly. Document all the components and create some kind of easily browsable index. Consistency should be a concern when it comes to documentation too: use the same style and tools throughout all teams.
Extensive documentation will flatten the learning curve, especially for newcomers. It will also reduce the developers' involvement in deployment and in supporting component users.

Architecture: Encapsulate JavaScript/CSS into easy to use components

I mentioned earlier the different roles in your organization. It could be that the users of your components don't have a high degree of knowledge of JavaScript and/or CSS (keep in mind that JavaScript is the most misunderstood language). E.g. YahooUI is a great library, but it requires JavaScript knowledge. The same goes for jQuery components. Instead of this approach, or actually on top of it, you should use a very simple principle: convention over code. Just to take the JavaScript part out of the equation, you could use CSS classes to annotate your components by convention. E.g. instead of $("TABLE.whatever").tablesorter() you can simply use a .sorted CSS class and include $("TABLE.sorted").tablesorter() in your main JavaScript, run on load. Of course, this will impact initial load performance, but there are always trade-offs :).
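The convention-over-code wiring can be sketched as a map from class names to initializers, applied site-wide on load. The `enhancers` map and its entries below are hypothetical stand-ins; with jQuery loaded, a real entry for `sorted` would call `$("TABLE.sorted").tablesorter()` as in the text.

```javascript
// Behaviors are declared once, keyed by CSS class, and applied
// site-wide on load. Component users only add a class to their markup.
var enhancers = {
  // className -> initializer run on each matching element
  "sorted": function (el) { el.tableSorted = true; },   // stand-in for tablesorter()
  "collapsible": function (el) { el.collapsible = true; }
};

function enhance(elements) {
  elements.forEach(function (el) {
    (el.className || "").split(/\s+/).forEach(function (cls) {
      if (enhancers[cls]) { enhancers[cls](el); }
    });
  });
}

// In the browser this would run on DOM ready, e.g. with jQuery:
//   $(function () { enhance($("table").get()); });
```

The component user never touches JavaScript; the single site-wide script owns all the behavior.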

Performance and maintainability: Reduce the number of global resources

Reducing the number of global resources reduces the number of requests and improves performance. But performance is not the only concern; maintainability matters too. It's easier to deploy one resource and it's easier to update one reference. By resources I mean stylesheets, JavaScript files, images, fonts etc.

The number of stylesheets and JavaScript files can easily be reduced to one each. For CSS, preprocessors like Less or Sass are of great help, and for JavaScript, module systems like AMD. These tools help you modularize and organize your development environment while still deploying only one resource to production. They also make it easy to integrate third-party resources.

When it comes to images, icons more specifically, icon fonts are the key. They are supported by many browsers, reduce all icon requests to one, and are scalable in size and color. The only disadvantage is that you cannot have multicolor icons. You can combine icons to obtain this, but it can become cumbersome.

Naming: Prefer short, but meaningful names

As for naming, try to use generic but meaningful names for everything, from resources to CSS classes. E.g. for resources, prefer "global.css" to "global_v3_blue.css", as you should stick with these names for a long time. This way, when moving to a new version, you will still have a meaningful name. Also make use of server-side processing if you want to personalize the experience for different users, geographies, browsers etc. E.g. if you want two different stylesheets, one for mobile devices and one for desktop browsers, refer to both as "global.css" but do a server-side redirect based on device detection.
As for the CSS classes, I already wrote about this subject.

Naming: Use the semantics of HTML tags

Use the semantics of HTML tags instead of defining new CSS classes, where appropriate. It is preferable to use H1 instead of .title and H2/H3 instead of .sectionTitle.

Naming: Use as few CSS classes as possible

This falls into the same reuse category. If you have hundreds of classes, it will be hard to remember which class is for what and when to use one over another. Prefer using contextual references in CSS selectors or combining existing classes rather than creating new ones. E.g. do not create a new class .right-menu; use the two classes .menu.right instead. Of course, a better name would be .menu.contextual.
So reuse extends even to CSS class names.

These are simple rules, but following them is not always easy. They will help you get an easily maintainable website, so you can react quickly to industry changes.

Categories: Web