PhoneGap setup

March 22, 2016

It’s not the first time I’ve played with PhoneGap, but I haven’t done it in quite some time. I’ve always liked the idea of creating a platform-independent application. And if that application can be tested directly in the web browser, even better.

Creating a user interface in a descriptive language like HTML is easier than a programmatic approach where you have to write code to build your visual components. Nowadays most frameworks also offer a descriptive approach, usually through XML, but learning a new language when you already know a more powerful one is not that appealing. HTML is also augmented by CSS, which offers a high degree of customization, and by JavaScript, which brings the functionality. Together they create a platform-independent framework with a high degree of customization and a clear separation of layers.

So it’s clear why I liked the idea of PhoneGap right from the start. Now, let’s set it up.

To develop a PhoneGap application you don’t need too many things. The best way is to install Node.js and then PhoneGap: npm install -g phonegap.

Then you can create a sample application with phonegap create my-app, a command which creates all the necessary files and subfolders under the my-app folder.

Now comes the testing part, and for this you need to install PhoneGap Desktop. As I said, it’s nice that you can test your app directly in your browser by visiting the link displayed at the bottom of the PhoneGap Desktop window, e.g. http://192.168.0.1:3000 (hint: it doesn’t work with localhost or 127.0.0.1). And if you install the PhoneGap Developer App you can easily test on your mobile too, without the hassle of reinstalling the application every time you make a change – changes are automatically deployed (reloaded).

Once you’re done, the fun part begins – actually building the application. Let’s do this for Android.

First you need to install the JDK (I tested with version 8) and Android Studio.

And then you need to set up some environment variables:

  • JAVA_HOME – this must be set to the folder where your JDK, not JRE, is installed.
  • ANDROID_HOME – this must be set to the folder where your Android SDK is installed.
  • add to PATH the following: %ANDROID_HOME%\tools;%ANDROID_HOME%\platform-tools;%JAVA_HOME%\bin on Windows, or ${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools:${JAVA_HOME}/bin on Linux.

If the above are not correctly set, or the PATH is invalid (e.g. it contains an extra quote (“) or semicolon (;)), you can run into errors like:

  • Error: Failed to run "java -version", make sure that you have a JDK installed. You can get it from: http://www.oracle.com/technetwork/java/javase/downloads. Your JAVA_HOME is invalid: /usr/lib64/jvm/java-1.8.0-openjdk-1.8.0
  • Error: Android SDK not found. Make sure that it is installed. If it is not at the default location, set the ANDROID_HOME environment variable.

I also had to run

phonegap platforms remove android
phonegap platforms add android@4.1.1

By default, version 5.1.1 of the Android platform had been installed, but I was getting the error Error: Android SDK not found. Make sure that it is installed. If it is not at the default location, set the ANDROID_HOME environment variable. You can check which platforms you have installed by running phonegap platforms list.

Make sure that you have all the Android tools and SDKs installed by running android on the command line, then select and install any components that are missing.

Finally, you can build the application by running the following command in your project folder:

phonegap build android

and if everything goes well you’ll find your APK at <your-project-dir>/platforms/android/build/outputs/apk.

Categories: Software, Web

IMW 2015 or How to make an (un)interesting presentation

October 8, 2015

I’ve just attended, for the fourth (I guess) year in a row, the biggest Romanian conference on Internet and mobile. Bigger and bigger each year, Internet and Mobile World is an event I’ve always had a love-hate relationship with. At first I say it’s not interesting and it’s probably the last year I’ll attend, but then a few presentations and exhibitors change my mind. This year was no different.

I recently read a Dale Carnegie book in which he says that if you want to sell something you shouldn’t talk about yourself and your product, but about the customer’s needs. Nothing but true. With this in mind, I saw some really boring presentations. First of all, some of them were given by managers or CEOs. I don’t have anything against CEOs, but let’s get one thing straight: they got there not because they know how to captivate an audience, but because they know how to run a business. Unfortunately, some managers, as soon as they become managers, seem to instantly gain access to the entire world’s knowledge and become proficient in every skill known to mankind. No! Again, they became managers because they recognized talent and knew how to bring it into their business. They should do the same here – get someone else to give a really interesting and captivating presentation.

I understand that you want to brag about the cool things you are creating and how well your company is doing. But I, in the audience, am also doing cool things. And if I’m not, I probably hate you for it. Or I simply don’t care. You don’t care about my needs or interests either, so why shouldn’t this be a fair relationship?

I can easily get a financial report, a projection or a product/service portfolio from any company website. So don’t come and present long and boring slides about any of these topics. But if you talk to me about my needs and interests, and then slowly introduce how a new technology can help me, you can arouse my curiosity. And now that you’ve got me, you can also happen to mention your cool and innovative product that is the incarnation of that technology. And don’t make any false claims, either about the product or about your expertise in the field. If I realize it – and I surely will, if I’m interested in the subject, given the many channels for gathering information nowadays – then I will drop your presentation like a rotten apple. At the very best, a shadow of doubt will be cast over it entirely.

High-level managers of large corporations, the ones that haven’t evolved from startups, usually tend to give results-oriented presentations. Whoa! I did not pay for a ticket to come and do your business for you. I got it – your company is the greatest, and I think we established that already. Developers, on the other hand, tend to lose themselves in small technical details – I even saw slides of code in front of a general audience. Whoa again! I did not come here to work. I came to get fresh ideas, make new contacts, and see where the market (not a company or product) sits as a whole. So you’d better engage me: show me something cool and interesting, spiced with clear use cases. Walk in my shoes and show me a road to your product, but make it an interesting road.

Few presentations were like this, but those that were fell into the interesting area, even though I already knew some of their content. And for God’s sake, speak loudly and never in a monotone voice. And if you don’t master English, better ask your audience whether you can give your presentation in your mother tongue.

I also need to highlight an interesting idea materialized into a cool product by a Romanian company. Altom built a robot that, using two motors and a camera, was able to automate testing on real(!) devices like cameras and tablets. Using the camera it recognized images, and it could then move on the XY axis and tap on the device. And you could write your testing scenarios directly from your IDE, just like a Selenium test case! I would vote it the most innovative product at the fair.

In general, the market trends seemed to be streaming and video on mobile, and the Internet of Things. Robotics seemed to be catching on, but that has been the case with this field for years and years – it has spikes, but never really becomes mainstream. Indeed, robots have already entered our lives in different shapes and forms, but not to the extent SF fans have always hoped. The cloud has moved to a mainstream level, which is about right: I would definitely not think today of creating my own infrastructure, no matter the size of the project or company.

So, all in all, IMW 2015 wasn’t a waste of time, with some (though not a majority of) interesting presentations and exhibitors. Will I go next year? We’ll see :).

Categories: Technology

Shift an array in O(n) in place

April 7, 2015

Below I copied the code for shifting an array in place.

void shift(Object[] array, int startIndexInclusive, int endIndexExclusive, int offset) {
    if (array == null) {
        return;
    }
    if (startIndexInclusive >= array.length - 1 || endIndexExclusive <= 0) {
        return;
    }
    if (startIndexInclusive < 0) {
        startIndexInclusive = 0;
    }
    if (endIndexExclusive >= array.length) {
        endIndexExclusive = array.length;
    }
    int n = endIndexExclusive - startIndexInclusive;
    if (n <= 1) {
        return;
    }
    // normalize the offset to [0, n)
    offset %= n;
    if (offset < 0) {
        offset += n;
    }
    while (n > 1 && offset > 0) {
        int n_offset = n - offset;

        if (offset > n_offset) {
            // the last n_offset positions are now final; keep shifting the first offset elements
            swap(array, startIndexInclusive, startIndexInclusive + n - n_offset, n_offset);
            n = offset;
            offset -= n_offset;
        } else if (offset < n_offset) {
            // the first offset positions are now final; keep shifting the remaining n_offset elements
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            startIndexInclusive += offset;
            n = n_offset;
        } else {
            // offset == n_offset: swapping the two halves completes the shift
            swap(array, startIndexInclusive, startIndexInclusive + n_offset, offset);
            break;
        }
    }
}
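
For example, shifting the whole array by 2 rotates it to the right by 2 positions (a quick sanity check with boxed values, assuming the method above is in scope):

Object[] array = {1, 2, 3, 4, 5};
shift(array, 0, array.length, 2);
// array is now {4, 5, 1, 2, 3}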

The swap(array, index1, index2, len) method swaps, in the given array, the elements from [index1, index1 + len) with the ones from [index2, index2 + len).
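
For reference, here is a minimal sketch of how such a swap helper could look – assuming the indices and length are already valid; the actual commons-lang implementation is more defensive about out-of-range values:

void swap(Object[] array, int index1, int index2, int len) {
    // swap, element by element, the blocks [index1, index1 + len) and [index2, index2 + len)
    for (int i = 0; i < len; i++, index1++, index2++) {
        Object temp = array[index1];
        array[index1] = array[index2];
        array[index2] = temp;
    }
}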

Even though it may seem complicated at first, the idea is pretty simple. If the offset is half of the array length, or in other words if offset == (n – offset), where n is the total number of elements to be shifted, then the shift is equivalent to swapping the two halves of the array.
For the first two cases we swap the portions at the two ends, one of them ends up in its final place, and we continue the iteration for the rest, as shown in the figure below.

[Figure: the shift algorithm]

Space complexity is clearly O(1), but what about time complexity? I’m going to prove that it is O(n).

Let Sh(n, k) be the problem of shifting an array of size n by k positions, and Sw(k) the problem of swapping k elements in an array. For the sake of simplicity I left out the start and end indices.

It is obvious that O(Sw(k)) = O(k).

It is also obvious that O(Sh(1, k)) = O(1), with k < 1. Also O(Sh(x, 0)) = O(1).

Now let's assume that O(Sh(m, k)) = O(m) for every m ≤ n, with k < m. I'll try to prove that O(Sh(n + 1, k')) = O(n), with k' < n + 1.

Analyzing the algorithm we have

O(Sh(n + 1, k')) =
    O(Sw(n + 1 – k')) + O(Sh(k', 2k' – n – 1)), if 2k' > n + 1
    O(Sw(k')) + O(Sh(n + 1 – k', k')), if 2k' < n + 1
    O(Sw(k')), if 2k' == n + 1

  • First case 2k’ > n + 1

    O(Sh(n + 1, k’)) = O(Sw(n + 1 – k’)) + O(Sh(k’, 2k’ – n – 1)) = O(n) + O(Sh(k’, 2k’ – n – 1))

    Because (k' < n + 1) ⇒ (k' ≤ n) and also (k' < n + 1) ⇒ (2k' – n – 1 < k'), the induction hypothesis applies to Sh(k', 2k' – n – 1), so O(Sh(n + 1, k')) = O(n) + O(k') = O(n).

  • Second case 2k’ < n + 1

    O(Sh(n + 1, k')) = O(Sw(k')) + O(Sh(n + 1 – k', k')) = O(k') + O(Sh(n + 1 – k', k')) = O(n) + O(Sh(n + 1 – k', k'))

    Because (0 < k') ⇒ (n + 1 – k' ≤ n) and (2k' < n + 1) ⇒ (k' < n + 1 – k'), the induction hypothesis applies to Sh(n + 1 – k', k'), so O(Sh(n + 1, k')) = O(n) + O(n + 1 – k') = O(n).

  • Third case 2k’ == n + 1

    O(Sh(n + 1, k’)) = O(Sw(k’)) = O(n)

So O(Sh(n + 1, k')) = O(n). By induction, O(Sh(n, k)) = O(n) for any n, with k < n.

The code will become part of commons-lang, in the ArrayUtils class, as of version 3.5.

Categories: Web

I had it with Maven

April 6, 2015

Initially I was a big Maven fan. It was the best build tool. When coding in C I was using make, and after switching to Java, Ant naturally came next.

But Maven was so much better. It had a few things you cannot help but love. First of all, dependency management. Getting rid of downloading jars, including them in a lib folder, and the even bigger pain of updating them … wow … getting rid of all of that was a breeze.

Maven also had a standard build lifecycle, so you can download a project and start a build without knowing anything about it. You could have done this in Ant, but there projects would have had to follow a convention, which wasn’t always the case. In all honesty, there wasn’t even one :), at least not a written, formal one.

And then Maven came with a standard folder structure. If you pick up a new project it’s clearly easier to understand it and find what you’re looking for.

And … that's about it. I would like to say that the fact that it uses XML was also a good thing. XML is powerful because it can be easily understood by both humans and computers (read: programs). But no tool other than Maven was ever interested in understanding its POM. And while XML is great for describing and formalizing something like a workflow, using it for imperative tasks – not so much. Ant was doing this, and going through an Ant build wasn't the easiest task of all.

Maven is also known for its verbosity. If you go to mvnrepository.com and pick any package there, you’ll clearly see that Maven’s snippet is the most verbose one:

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>

as compared to Gradle, for example:

'org.thymeleaf:thymeleaf:2.1.4.RELEASE'


And that’s for adding just one dependency. If you have 30 or more in your project, which for a web application is not uncommon, you end up with a looot of code …

And then there’s customization. It is very good that Maven comes with a standardized lifecycle, but if it’s not enough (and usually it isn’t), it’s very hard to customize. To do something really useful you’ll need to write a plugin. I know there is the exec plugin, but it has some serious drawbacks, the most important one being that it cannot do incremental builds. You can simply run a command, but you cannot run it only when some destination files are outdated compared to their corresponding sources.

So I needed something else. I looked a little over the existing build tools and none of them seemed very appealing, but I ended up switching to Gradle. Two reasons: Spring is using it (I’m a big fan of many Spring projects) and I also wanted to get acquainted with Groovy.

Switching a web application project took me 3 days, and I ended up with a 10 KB build file instead of a 36 KB one, plus many added features. I was able to build my CSS and JS using Compass/Sass and browserify and, more importantly, incrementally (though as a whole).

I was also able to better customize my generated Eclipse project, including specifying derived folders. As a side note, for improved Gradle support in Eclipse you need to install Gradle IDE from the update site http://dist.springsource.com/release/TOOLS/update/e4.4/, and the minimum required version is 3.6.3 – see here why. You may need to uncheck Contact all update sites during install to find required software if you get an installation error.

Gradle is probably not the dream build tool: it has a very loose syntax and it mixes descriptive properties with imperative tasks, which reduces readability. But it’s less verbose and much more flexible than Maven. With some coding standards applied on top, it could probably become a good choice.

Categories: Software

Smart laser tag

February 8, 2015

Laser tag is still a niche market, but it has caught on a little in recent years. There are a few things holding it back from becoming mainstream.

Even though you can play it virtually anywhere, whether in the office or in the woods, it is mostly played in arenas. And I think the problem is the equipment. It is bulky, hard to set up and most of the time needs a central computer to work at its full potential.

Of course, there are outdoor versions too. They are built around a microcontroller with uploaded software. The problem here is that we have limited computing capabilities and, as a consequence, limited features too. Lately there have been projects on Kickstarter where the guns are built using an Arduino-like board and sometimes even a touch screen. Extended capabilities and features.

Another issue is the sensors. A laser tag system is practically made of a gun and a set of sensors. The gun, using an infrared LED, emits a beam which, if intercepted by a sensor, counts as a hit. When the “life” points are completely drained, the gun stops working. Sensors are usually small and mounted on vests, headbands or guns. Why do I say this is an issue? First of all, if they’re mounted on the gun, that’s not too realistic, right? And the sensors are also directly connected to a gun. Again, not too realistic – you cannot have two guns, and you cannot pick up a killed enemy’s gun. So it would be nice if you could disconnect the gun from the sensors.

And when it comes to guns you have a limited number of models. As the systems are incompatible with one another, building many gun models is not feasible.

Another thing is the price of such a system. The cheapest ones usually go over $100 apiece.

My idea is that a very realistic, feature-loaded and cheap laser tag system can be built. To address the issue of the microcontroller (and its limited computing capability) and the price, the idea of using a smartphone instead is obvious. Nowadays almost everyone has a smartphone, and this greatly reduces the final cost of the system (we will not count the smartphone price, as you already have one).

Then we can easily address the gun-sensor connection. A gun will have a Bluetooth module, a (much smaller) microcontroller and an infrared LED. Plus a battery, a lens and two buttons: trigger and connect. The connect button allows the gun to be paired with a smartphone. Practically, a gun will transmit through the LED whatever it receives from the smartphone through its Bluetooth receiver. There is still a microcontroller, but it is “thinner”, smaller and cheaper.

This way a gun can be handled by more than one player – not at the same time, of course. And the gun can be built as a module that can be mounted like a scope on any existing gun, such as an airsoft gun. So you also have a wide selection, and even blow-back guns.

And the sensors, mounted on the vest, the headband or both, will connect directly to the smartphone, which will be the one to manage the “life” points. They can connect to the smartphone through Bluetooth or USB, but in the latter case the smartphone must support USB OTG (which is becoming more and more common). And this solves another issue: the battery. I know that a smartphone usually has a shorter battery life, but external power banks are now cheap and common, and many people most probably already have one. For convenience you can even keep the smartphone on your favorite armband. So the sensor vest, maybe connected with a headband, will be just a network of infrared receivers, with different circuits if you want hits to carry different weights. This way the system allows head shots for the most skilful players. The vest can be connected to only one smartphone, and if the “life” points are drained, the app will declare the player dead and will stop transmitting to the gun.

Now going back to the laser tag system as a smartphone app. What new features does this bring?

Practically, there’s no smartphone without GPS. Now imagine that you have a map on which you can see the position of your troops (and even their “life” points status), you can launch virtual airstrikes and even set up virtual proximity mines (with friendly fire or not). That brings new dimensions to the game, doesn’t it? And if the smartphone is actually a Google Glass-like device, augmented reality opens up new opportunities. You can have this on a smartphone too, but, if it’s not somehow mounted on the gun, it is harder to handle while actually viewing the augmented reality.

New games can easily be invented, like capture the hill or capture the flag (with virtual flags). If you pass through a certain area, you capture a flag and a notification can be sent to everyone. Even games like escorting prisoners are easily within reach.

Customization now becomes the sweetest part of such a system. Besides the “life” points, you can have armour. Or doctors. Or prisoners. Or you can even have virtual drops that players can pick up just by walking into the exact area. They can contain meds (life points), armour or shells.

And to top it all off, if you build such a system in a pluggable way, then anyone can develop new guns, games or any other features and share them with the entire community. Imagine that you use PhoneGap and everyone can develop a plugin using JavaScript. A web laser combat game – just wow!

Now to talk a little bit about the price – the production price, I mean. The gun module has a Bluetooth chip, a microcontroller, an infrared LED, a lens, a battery enclosure and two buttons. The parts are all probably under $40. The sensor vest and headband have infrared receivers, a microcontroller and a USB connector – all together under $40. So you have a full-featured laser combat system, easily extendable through software plug-ins, for under $100. That’s without counting the smartphone, the power bank and the software. But if the latter were a community open-source effort … maybe soon …

Categories: Ideas

WebRTC saga

February 5, 2015

Recently I started using WebRTC. Cool technology. You can build a web chat in just tens of lines of code, you can take pictures with your webcam and, using canvas, you can manipulate them. And these are just a few examples of what WebRTC can do for you and your web app.

Tens of lines of code indeed, but not easy lines at all. Even though WebRTC enables peer-to-peer communication, you still need some server components, and they are involved in the session initiation.

First there is an ICE/STUN/TURN server, which is used by a client to discover its public IP address if it is located behind a NAT. Depending on your requirements, it may not be necessary to build/deploy your own server – you can use an already existing public (and free) one – here‘s a list. You can also deploy an open source one like Stuntman.

Then comes the signaling part, used by two clients to negotiate and start a WebRTC session. There is no standard here, and you have a few options.

You can use an XMPP server with the Jingle extension. Developing your own XMPP server (or component) just for this would be overkill, so you should definitely consider using an existing one. But even installing, configuring and integrating an existing one could be too much if you don’t use it for anything else. If you already have one in your infrastructure, though, you could hook into it. On the client side, you can use an existing XMPP JavaScript library.

You can also use SIP, a protocol encountered much more often in VoIP. Like XMPP, SIP is too much to be used just for WebRTC signaling; but if you already have it in your infrastructure, then it could be easy to use. For SIP in Java you have two development options.

The high-level one is SIP servlets, with its most notable implementation, Mobicents. If you develop a non-GPL application, the Mobicents commercial license could be quite expensive just for WebRTC signaling. Mobicents is developed on top of Tomcat or JBoss, so, depending on your environment, this could also be a drawback.

The lower-level option for SIP is JAIN-SIP, which even Mobicents is built on. It is completely free. There you have different transport options, like TCP, UDP or WebSockets, and for WebRTC the latter is more appropriate.

With both options you’ll still have to develop the server logic, and SIP is a pretty cumbersome protocol, so prepare yourself for a few headaches.

If you decide on SIP, on the client side things could be a little brighter. There are already JavaScript SIP signaling libraries that you can easily integrate into your web application. I looked into JsSIP and SIPJS and I ended up using the latter. SIPJS is actually forked from JsSIP, but it encapsulates the intricacies of the protocol better, which makes it a little easier to integrate.

Another option is to develop your own signaling protocol using something like WebSockets. Then you have to develop both the client and the server side from scratch. Developing the server side, in my opinion, is easier than with the previous options. You practically have to develop a messaging system that forwards messages between users, without caring too much about their content. @ServerEndpoint abstracts the development in an elegant manner, and you can even reuse your existing authentication system, as you can bind a WebSocket session to the HTTP one.
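
Just to give an idea, here is a minimal sketch of such a relay using the Java WebSocket API (JSR 356). The endpoint path, the user path parameter and the message format (a JSON payload with a "to" field naming the peer) are assumptions of mine, not anything imposed by WebRTC:

import java.io.IOException;
import java.io.StringReader;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.json.Json;
import javax.json.JsonObject;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

// Hypothetical relay: it only forwards signaling messages (SDP offers/answers,
// ICE candidates) between users, without caring about their content.
@ServerEndpoint("/signaling/{user}")
public class SignalingEndpoint {

    // user name -> open websocket session
    private static final Map<String, Session> SESSIONS = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(@PathParam("user") String user, Session session) {
        SESSIONS.put(user, session);
    }

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // assumed message format: a JSON object with a "to" field naming the peer
        JsonObject json = Json.createReader(new StringReader(message)).readObject();
        Session peer = SESSIONS.get(json.getString("to"));
        if (peer != null && peer.isOpen()) {
            peer.getBasicRemote().sendText(message);
        }
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.values().remove(session);
    }
}

The HTTP session binding mentioned above would normally happen in a handshake configurator (ServerEndpointConfig.Configurator), which I left out for brevity.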

If you’re worried about WebSocket browser support, don’t be. Wherever WebRTC is supported, WebSockets are too.

On the client side, things will be a little more complicated, as you will have to develop the entire message flow; the server just forwards your messages to the appropriate peer. This entire flow should be asynchronous, somewhat like a request-response paradigm.

The development difficulty here arises mainly from the different behavior of the various browsers. And by various, I mean Firefox and Chrome, the main ones supporting WebRTC.

When it comes to WebRTC you don’t have to code anything for the actual streaming, only for initiating the session. Theoretically, the WebRTC session initiation process is as follows. Let’s suppose A wants to talk to B, and for the sake of simplicity let’s say that “A signals B” means that A sends a message to B through the signaling channel.
1. A creates and initializes an RTCPeerConnection. A few things are involved here: a list of addresses of the ICE/TURN/STUN servers and a local stream to share with B. The list of ICE servers is passed in the constructor; the local stream is obtained through navigator.getUserMedia() and added afterwards.
2. Using that connection, A creates an SDP (Session Description Protocol) offer.
3. A signals B the offer.
4. When B receives the SDP offer from A (or even before), B creates an RTCPeerConnection like in step 1.
5. B sets the remote session description: RTCPeerConnection.setRemoteDescription(new RTCSessionDescription(SDPOfferFromA)), where SDPOfferFromA is the JSON object received through the signaling channel.
6. B creates an SDP answer.
7. B signals A the answer.
8. When A receives the answer, A sets the remote description like in step 5, using SDPAnswerFromB.
9. Asynchronously, whenever a new ICE candidate is discovered and presented through the onicecandidate event of RTCPeerConnection, A (or B) should signal it to B (or A). When the other party receives it, it adds it using RTCPeerConnection.addIceCandidate(new RTCIceCandidate(candidateDescriptionReceived)).

Again, theoretically. In practice …

First of all, because WebRTC is not final but still in draft state, all the types are prefixed, like mozRTCPeerConnection (in Firefox) or webkitRTCPeerConnection (in Chrome and Opera). But this can be easily fixed:

window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;


Then comes the ICE candidates part. In Firefox the ICE candidates must be gathered before creating an SDP offer or answer. So before creating an offer/answer, check RTCPeerConnection.iceGatheringState and do it only when it is "complete".

There are also a few inconsistencies when it comes to the media constraints passed to navigator.getUserMedia().

In the end, I can conclude that WebRTC is really cool and you can build media applications in a matter of tens or hundreds of lines of code. I guess it could have been designed to be easier to use, though. Even though there are a lot of opinions stating that the signaling protocol is better left out of the standard, I still think it would be better if an optional one were included. Or at least a minimal, easy-to-implement API.

Categories: Web

Choosing an XMPP server

January 28, 2015

Lately I’ve been working with WebRTC. One of the biggest issues there is the signaling part.

As an option, you can choose XMPP with its Jingle extension. So, naturally, I looked into a few XMPP servers. What were the requirements I was after?

My architecture is Java based, so I was looking into solutions built on this technology. I know I can integrate different technologies through things like LDAP or web services, but … if I needed something custom, it would be much easier to develop it in Java. As a nice-to-have, the final solution should be extensible through some kind of plugin system. Another requirement was for it to be open source, or at least affordable.

Taking these into account I narrowed down the list to OpenFire, Tigase, Apache Vysper and Jerry Messenger.

Apache Vysper aims to be a modular, full-featured XMPP (Jabber) server. Unfortunately, it does not offer an easy out-of-the-box installation and integration procedure. It also seems to be in a beta stage, not production ready.

Jerry Messenger has an embedded Jetty server, it is easily configurable and it features a plugin system.

OpenFire seemed the most complete solution in its field. It has a very rich web admin interface, a plugin system and extensive documentation. As a bonus, it is just a web application so you can install it in your favorite web application server along with your other web applications.

Tigase claims to be the most scalable XMPP server, supporting hundreds of thousands of concurrent users. It is configurable, standalone and pluggable. Unfortunately, the documentation is not as extensive.

Clearly, my preference went towards OpenFire and Tigase. But I ended up not using XMPP at all for WebRTC signaling. Why? All about it in the next article.

Categories: Web