Demystifying Chatbots

Chatbots are hot these days. Everybody and their grandma has a bot now. Want to book a flight? There’s a bot for that. Customer support? There’s a bot. (Not to be confused with the Elite Executive Customer Support Engineer™ trained and employed by certain ISPs only to have scripted conversations.) Need some psychological help? That’s right; bot.

To a software engineer, this whole bot thing seems exciting and overwhelming at the same time. When I started working on my first Google Home Action, I didn’t know where to begin. Over the course of the past couple of years, I have developed a few such bot apps for different platforms – including Google Home and Amazon Alexa. While each platform comes with its own nuances, there is a common architectural pattern. In this post, I am going to draw a high-level picture of this pattern.

What Is a Chatbot?

For the scope of this post, I am going to assume that any program capable of having some sort of conversation on a topic is a chatbot. So, it can be a feedback collection bot that asks the user to rate a bunch of questions from 1 to 5, or it can be JARVIS.

Building Blocks

Chatbot frameworks across different platforms follow a similar structure. Google and Amazon both provide infrastructure for end-to-end bot development. However, the similarity in architecture allows developers to swap out the default components and plug in new ones.

Let’s go through these building blocks – starting at the user interface.

User Interface

The main advantage of bots over traditional apps is diversity in user interface. Chatbots can communicate over a wide variety of interfaces like text, voice, etc. This enables developers to deliver their chat apps across a wide variety of platforms with minimal effort. For example, the same backend can power bots on Facebook Messenger, Google Home, Alexa, and WeChat.

This is great for the users since they can just open a new chat on a platform of their choice instead of downloading another app.

Overall, the interfaces can be categorized into the following groups:

  • Point and Click: This is the simplest. The bot asks a question and provides a bunch of options to choose from. Depending on the user’s choice, the next question is asked. Domino’s has one such bot.
  • Free Text: This type of bot allows the user to type free text messages. This adds a layer of difficulty: understanding the intent behind the user’s message. Plenty of websites serve their FAQs as a chatbot.
  • Voice: These are the bots that you actually talk to. Siri, Alexa, Cortana, Google Assistant – you’ve probably met one of these. This is essentially a free text interface with Speech-to-Text on top of it.

Language Processor

NLP is the magic sauce that powers a chatbot. A natural language processor is responsible for converting unstructured human text into a structure that can be consumed programmatically. NLP has two major responsibilities, namely intent classification and entity extraction.

In case of Alexa, NLP is provided by the built-in Alexa Skills Kit, while Google recommends Dialogflow. These services do a little more than just NLP, but that is for later.

Let’s take an example question:

What was my balance in my savings account last month?

Intent Classification

Each message from the user needs to be classified into one of the predefined intents. The intent tells us whether the user said “hello”, asked for account balance, or thanked the bot.

The example above should be classified under the awesomely named intent “ask_account_balance”.

Entity Extraction

Entities are the parameters embedded in the user’s utterance. The example above has two entities.

  • Account type: savings
  • Time: last month

So, a natural language processor will take our question and give a structured output. Of course, the exact structure will vary according to the NLP system used.
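To make this concrete, here is a sketch of what that structured output might look like for our example question. The field names are illustrative only – Dialogflow, the Alexa Skills Kit, and others each have their own response shapes – and the `parse` function below simply hard-codes the result a trained model would produce.

```python
def parse(utterance):
    # A real NLP service derives this with trained models; here we just
    # hard-code the result for the example question to show the shape.
    return {
        "query": utterance,
        "intent": "ask_account_balance",
        "entities": {
            "account_type": "savings",
            "time_period": "last month",
        },
    }

parsed = parse("What was my balance in my savings account last month?")
```

Everything downstream – the fulfillment service in particular – works off this dictionary rather than the raw utterance.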

Now that we have a structure, it is time to give some answers.

Fulfillment Service

This is probably the simplest part. A fulfillment service takes the structure above and returns an answer that the bot can utter.

In our example above, maybe we query the database and form an utterance: “Your account balance for savings account was ₹345 on 30th of June. You’re broke!” (That last part is not strictly necessary.)

For most household chatbots, the fulfillment service is just an HTTP REST service. Google Cloud Functions, AWS Lambda, Azure Functions, etc. are great candidates for hosting such a service.
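As a sketch, the core of a fulfillment handler for our running example could look like the following. The function and field names are my own assumptions (they should match whatever structure your NLP layer emits), and `fetch_balance` is a stand-in for a real database query; in production this logic would sit behind an HTTP endpoint on one of the services above.

```python
def fetch_balance(account_type, time_period):
    # Stand-in for a real database query.
    return {"amount": "₹345", "as_of": "30th of June"}

def fulfill(parsed):
    # Dispatch on the intent produced by the NLP layer.
    if parsed["intent"] == "ask_account_balance":
        entities = parsed["entities"]
        balance = fetch_balance(entities["account_type"],
                                entities["time_period"])
        return (f"Your balance for {entities['account_type']} account was "
                f"{balance['amount']} on {balance['as_of']}.")
    return "Sorry, I didn't get that."
```

The snarky “You’re broke!” suffix is left as an exercise to the reader.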

Multi-Turn Conversations

A full-blown conversation requires much more information than the current message alone can provide. Multi-level information sources are used to construct the conversation context.

Persistent Storage

The information that remains static across conversations is obtained from sources like databases. User details, user preferences, etc. are good examples since they need to be persisted across sessions.

Session Storage

Some information is transient and needs to be remembered only in the current session. For example, when the user says, “Let’s talk about my savings account” the bot needs to remember that the account type is “savings” for the rest of the conversation.

Such information is maintained within the session using a “bag of slots”. A bag of slots is just a dictionary of slot names and values. While the structure may differ between frameworks, they all provide a way to store and retrieve values from session level storage.
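In code, a bag of slots really is just a dictionary scoped to the session. Here is a bare-bones sketch; real frameworks expose an equivalent (Dialogflow contexts, Alexa session attributes), usually with extras like slot lifetimes.

```python
class Session:
    """A minimal session-scoped bag of slots."""

    def __init__(self):
        self.slots = {}

    def set_slot(self, name, value):
        self.slots[name] = value

    def get_slot(self, name, default=None):
        return self.slots.get(name, default)

session = Session()
# User: "Let's talk about my savings account"
session.set_slot("account_type", "savings")
# A later turn ("What's the balance?") pulls the account type from the bag.
account = session.get_slot("account_type")
```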

That was a bird’s-eye view of how a bot works. I have deliberately kept a lot of details out of this post in order to keep it simple. We will look at those in the coming posts.

Introducing Pychkari: Dependency Injector for Python

I was in the market for a simple Python library for my dependency injection needs. I had a very basic set of criteria.

  1. Python 3 support. (Duh!)
  2. It should be as simple as possible to understand and use.
  3. It should work with my existing code base without significant refactoring and redesigning.
  4. Minimal dependencies. Better if there are none.
  5. There shouldn’t be a lock-in. Should I choose to move to a different library/framework in the future, it should be a seamless process.

Unfortunately, I couldn’t find anything that fit these criteria. While some options do come close, most of them require the classes to be annotated, decorated, etc. That violates points 3 and 4 right away.

After writing hacky in-project implementations for a while, I decided to roll my own library. Enter Pychkari!

What’s with the Name

What’s in a name!

Anyway, Pychkari is a pun on Pichkari, which is the Marathi (and Hindi) word for the water guns used during the festival of Holi. A Pichkari, in turn, is basically a huge injection. So, Pychkari, you see, is a huge injection for Python.

Note: I have been made aware that there’s a similar sounding word in Tamil that has a meaning along the lines of ‘beggar’. This is not intended. Any resemblance is purely coincidental.

Now that the joke in the name has been explained beyond the point of humor, let me move on to the technical bits.

Installation and Usage

I suggest visiting the GitHub repository or the PyPI page for the always up-to-date guides, but here’s a quick overview.
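To illustrate the idea without reproducing the full guide, here is a toy container showing what convention-based, constructor-driven injection looks like. This is a from-scratch sketch of the pattern, not Pychkari’s actual API – the class and method names below are made up for illustration; see the repository for the real thing.

```python
import inspect

def to_class_name(arg_name):
    # Naming convention: http_client -> HttpClient
    return "".join(part.title() for part in arg_name.split("_"))

class Container:
    """A toy DI container resolving dependencies by argument name."""

    def __init__(self):
        self._registry = {}   # class name -> class
        self._instances = {}  # class name -> singleton instance

    def register(self, cls):
        self._registry[cls.__name__] = cls

    def get(self, name):
        if name in self._instances:
            return self._instances[name]
        cls = self._registry[name]
        # Resolve each constructor argument (skipping `self`) by convention.
        params = list(inspect.signature(cls.__init__).parameters.values())[1:]
        args = [self.get(to_class_name(p.name)) for p in params]
        instance = cls(*args)
        self._instances[name] = instance
        return instance

class HttpClient:
    def __init__(self):
        pass

class WeatherService:
    def __init__(self, http_client):  # matched to HttpClient by name
        self.http_client = http_client

container = Container()
container.register(HttpClient)
container.register(WeatherService)
service = container.get("WeatherService")  # http_client injected for free
```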



I am pleased to say that the goals were met without a lot of effort. And then some.

Constructor Based Injection: This is arguably the most common type of DI used. Pychkari fully relies on it. Constructor based DI ensures that the services are fully constructed with their dependencies. No more dependencies resolving to None.

Convention Based Resolution: Pychkari resolves dependencies based on dependency names. Provided that you are already using proper naming conventions (if not, why the hell not?), Pychkari will magically link services with constructor arguments.

For example, if a constructor argument is called http_client, the service registered as HttpClient will automatically be injected. No need for weird class decorations.

Python 3 Support: Pychkari is written in Python 3 for Python 3. It should support anything above Python 3.4.

Ease of Use: Pychkari has a two-step operation: register and consume. What could be simpler than this!

Code Compatibility: If you wrote your code keeping DI in mind, chances are that you are already using constructor based DI. At this point, all a developer needs to do is to ensure that constructor argument names properly match the registered service names.

For example, to inject an instance of service HttpClient either name your constructor parameter http_client (or httpClient – we support multiple conventions) or annotate it with type annotations like so: any_arg_name: HttpClient.

Since no class decorations are needed, Pychkari works out of the box with the existing code base.

Small Size: The Pychkari wheel is well under 20 kB. Zipped source is barely 5 kB. And there are no dependencies.

No Commitment: Since we don’t have to make any code changes in service classes for using Pychkari, we are free to move to any other framework as and when needed.


All the development happens at Pychkari’s GitHub repository. A CI/CD pipeline has been set up using Travis CI.

Stop by, leave a star and say hello!

Alexa Meets Reddit

Alien Browser for Reddit

When I do not go to office, I spend about 25% of my total web time on Reddit. I spend a lot more than that when I do go to office. (I only browse r/programming, I swear!) I also come from a long line of people that can barely see without glasses. It was high time I did something about it. And so, an idea was formed.

The idea started out as a way to access Reddit without looking at a screen. Various options were explored. However, we live in the golden era of Voice User Interfaces, or VUIs. With the explosion of Google Homes and Amazon Echos, VUIs are inescapable. After a month of crazy, here we are with Alien Browser for Reddit.

The Core Concept

The basic idea is to enable Reddit browsing over voice. I had a vague idea in mind for subreddits like AskReddit, ELI5, TIFU, etc. Such “textual” subreddits are great contenders for a VUI. Alexa is an excellent storyteller.

Comments are easy enough to handle. For the most part, comments are just… text. Add the commenter’s name to the speech and you have near complete comment browsing experience.

I had a minimum viable product going based on textual subreddits and comments.

A Dash of Personality

Reddit users are (mostly) people, and people have personality. There is no reason every commenter should sound the same. Moreover, it is important to distinguish a comment’s content from Alexa’s narration.

Amazon Polly to the rescue. Amazon Polly is a service that enables text-to-speech synthesis in a bunch of voices. These voices have distinct dialects, pitch, loudness, etc. Alien Browser randomly assigns a voice to a comment and narrates the content in that voice. This gives an impression that the commenter is actually speaking to the listener.
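Here is a sketch of how such voice assignment might be wired up with boto3 and Polly. The voice list and the hash-the-username trick are my assumptions about one reasonable approach (hashing keeps a commenter’s voice stable within a thread); the `synthesize_speech` call itself requires configured AWS credentials, so it is deferred inside the function.

```python
import hashlib

# A few of Polly's English voices; the actual skill may use a different set.
VOICES = ["Joanna", "Matthew", "Amy", "Brian", "Raveena"]

def voice_for(commenter, voices=VOICES):
    # Hash the username so the same commenter always gets the same voice.
    digest = hashlib.md5(commenter.encode("utf-8")).hexdigest()
    return voices[int(digest, 16) % len(voices)]

def narrate(commenter, text):
    import boto3  # assumes configured AWS credentials
    polly = boto3.client("polly")
    response = polly.synthesize_speech(
        Text=f"{commenter} says: {text}",
        VoiceId=voice_for(commenter),
        OutputFormat="mp3",
    )
    # The audio stream can be cached and served to the Alexa response.
    return response["AudioStream"].read()
```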


The Meme Challenge

Memes are love. Memes are life. What is Reddit without its memes? Heck, what is life without memetics? Alas, memes are images and Alien Browser could only narrate the textual information. This presents a challenge.

Fortunately, we also have the power of AI to extract information out of images. With sufficient training, we can extract text out of a meme – on the fly! Theoretically, it is also possible to extract enough info to match a meme with one of the templates. Lucky for us, AWS provides all of this in a handy API via a service called Amazon Rekognition. It is recognition, but with a k – get it?
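For instance, pulling the text out of a meme with Rekognition’s DetectText API might look like this sketch. The line-joining helper is plain dict-munging over the documented response shape; the AWS call itself needs credentials, so it is deferred inside the function.

```python
def lines_from(detections):
    # Rekognition returns both LINE and WORD detections; keep whole lines.
    return [d["DetectedText"] for d in detections if d["Type"] == "LINE"]

def meme_text(image_bytes):
    import boto3  # assumes configured AWS credentials
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_text(Image={"Bytes": image_bytes})
    return " ".join(lines_from(response["TextDetections"]))
```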

Get it?

Now, Alien Browser can not only narrate text, but also tell the listener what a meme says. As a pleasant side effect, other subreddits like r/GetMotivated work with extra awesomeness. (What kind of a pretentious snob chooses motivation over depression memes though?)

What’s more! We also get a card inside the app with the image embedded in it.


The News Problem

Used judiciously, Reddit can be an excellent source of news. Reddit also provides a great platform for general reactions and discussions around a news item. Unfortunately, the media is biased towards bad news, and that is not good for our health. It is known. There is a lot of positive news that barely gets reported. Even when reported, it is lost in the avalanche of media.

What if we could focus only on happy (or neutral, even) news? With the power of AI infused sentiment analysis, this is a reality. You can tell Alien Browser to “only tell the happy news” and it will filter out all the negative news for that session. Again, the sentiment analysis is powered by Amazon Comprehend – an AWS offering.
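A sketch of what the “happy news only” filter could look like with boto3 and Comprehend. Treating NEUTRAL as acceptable mirrors the “happy (or neutral, even)” idea above; everything else here – function names, filtering headline by headline – is my assumption about one way to wire it up, and the `detect_sentiment` call needs configured AWS credentials.

```python
HAPPY_ENOUGH = {"POSITIVE", "NEUTRAL"}

def is_happy(sentiment_response):
    # Comprehend labels text POSITIVE, NEGATIVE, NEUTRAL, or MIXED.
    return sentiment_response["Sentiment"] in HAPPY_ENOUGH

def happy_headlines(headlines):
    import boto3  # assumes configured AWS credentials
    comprehend = boto3.client("comprehend")
    return [
        h for h in headlines
        if is_happy(comprehend.detect_sentiment(Text=h, LanguageCode="en"))
    ]
```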

Looking Ahead

It has been a great experience making something accessibility oriented. It presented challenges I hadn’t faced before. There are minor kinks to be ironed out, bugs to be squashed. There are a few UX improvements in the pipeline too.

In the meantime, go ahead and give it a try! Here’s a handy link.

Alien Browser for Reddit!

Sennheiser HD 4.50 BTNC Review


I have been looking at a lot of wireless headsets lately. They offer a lot of convenience for a nominal compromise in sound quality (and a huge burning hole in the wallet). And since I’ve already gotten my hands on a pair of Sennheiser HD 4.50 BTNC (what a mouthful), I might as well write a review.

Note: this is one of those reviews where I don’t follow any formal structure. There won’t be lab tests conducted in controlled conditions. If you’re looking for graphs labelled with phrases like “frequency response” and “interference”, you should probably visit one of the professional review sites.

While you’re at it, you should also reflect upon why you aren’t invited to parties. (Because you’re a spoilsport nerd!)

On a serious note, this review is written from an average user’s perspective – me. It has the details about daily usage, comfort, etc.


First Impression

The headset is packaged in a surprisingly simple cardboard-coloured box. It lives up to its claim of frustration-free, environment-friendly packaging. Inside the box are the headset itself, a charging cable, an aux cable with 3.5mm TRS pins on both ends, and a carrying case.

The headset has the typical Sennheiser build quality. The headband and arms have the Sennheiser logo printed in silver. The right cup has a bunch of buttons and ports around its rim. The main microphone is located on this rim as well. The outer cover of the right cup has a small grill that hosts the NFC chip used for quick pairing. Like most other models from Sennheiser, three elevated dots mark the left earcup.

The headset itself folds into a small package. The hinges have nice, satisfying clicks. The headband has a soft rubber finish on the inside. The earpads have a soft, squishy feel.


The Technology

Besides regular Bluetooth features, the HD 4.50 has a few extra bells and whistles, viz. aptX, NFC, and Active Noise Cancellation. All these features seem to deliver on the promise for the most part.

Fast pairing on Bluetooth 4.1 works as expected. This is more important than it seems at first glance. Refer to the handy flowchart in my previous post to know why.

The aptX CODEC

The makers of the aptX CODEC claim that it can provide “CD-like” audio quality over Bluetooth. While this may not be strictly true (at least to my ears), aptX does improve things significantly. In my tests with a bunch of lossless files, aptX made a noticeable difference in quality. As an aside, most of the Macs released in the last few years automatically use aptX when supported.

However great aptX is, it is no match for the good old cable. The same audio sounds better when you switch to the cable.

Remote Control

The right earcup has a nice volume rocker with a convenient bump on the volume up button. Pressing and holding both buttons toggles Active Noise Cancellation. While pressing both buttons feels awkward at first, it is pretty easy once you get used to it.

The playback controls are handled by a single click-and-slide switch. Pressing the switch does play/pause (or receive/end call), while sliding the switch triggers previous/next. Surprisingly, this button press does not have a click to it. There’s also a perceptible lag between the button press and the play/pause operation. The lack of a distinct click and the lag leave me wondering whether I have successfully pressed the button or it needs a harder push. This is confusing. Some sort of feedback on button press (like the feeling of a click) would be really helpful.

Lastly, there’s a power button. Press and hold triggers power on/off. Thankfully, this button has click feedback. Holding the button after the headset turns on triggers the pairing mode. The LED indicator starts blinking red-blue to indicate this. The red-blue LED blinky mode can also be used if you wish to very subtly pretend to be the police. I advise strictly against this.


Microphone

The audio capture is simply awesome. Plenty of wireless headsets (cough TAGG cough) suffer from faraway voice. This is especially common since the mic tends to be located near the ear and, hence, away from the mouth. Sennheiser seems to have managed to get around this and provides decent audio capture.

Sennheiser briefly hints that there are multiple microphones to capture user audio. That probably helps too.

Active Noise Cancellation

Sennheiser claims that it uses in-house active noise cancellation technology dubbed NoiseGard™. While it is not clear how the implementation differs from others like Bose’s, it works satisfactorily.

The technology cancels constant ambient noises like fans, aeroplanes, street traffic, etc. very successfully. It is not very good at cancelling out irregular noises – like ambient conversations, your housekeeper testing exactly how ‘unbreakable’ your fancy kitchenware is, or that annoying co-worker you’re trying to avoid. I can rest assured that I will hear it when my mom yells at me.

Spy Mode

Yup, you read that right. There’s a spy mode. Hear me out and follow the master:

  1. Turn on the noise cancellation.
  2. Put the headset in call mode. This mode is triggered whenever the mic is needed. You may trigger this by visiting the ‘input’ tab in audio preferences. Works on Windows as well as OS X. On a phone, you’ll need to disable ‘media audio’ and keep only ‘phone audio’ enabled. This can be done from the Bluetooth settings for your headset.
  3. ???
  4. Profit!

After doing this, I was able to hear the conversations louder and clearer than otherwise. What happens (I think) is that noise cancellation silences all the background noise but captures conversational audio and plays it right inside your earcups. This has many benefits.

  1. You can hear clearer since there’s no ambient noise.
  2. Audio is amplified and played right near the ears increasing comprehensibility.
  3. People think you can’t hear them and talk about stuff they don’t want you to know.

So, next time your uncle starts plotting your wedding behind your back, you can start making other arrangements – like picking songs and guests.

I suggest The Rains of Castamere and a bunch of assassins for a perfect Red Wedding theme. This has the added advantage of deterring the rest of your extended family from making similar attempts.

Sennheiser Send Their Regards

Battery Life

It is enormous. The battery lasts about three days of regular use. Charging time is decent. You could potentially use the headset over the aux cable while it is charging, but I would advise against it unless you like the thrill of a bunch of pressure-packed chemicals getting hot near your head.



Comfort

As a person gifted with elephant’s ears and the eyesight of a deep-water cavefish, comfort is a huge factor for me while selecting headphones.

The Sennheiser HD 4.50 BTNC is a closed-back circumaural model and covers the ears completely. The earpads are soft and do not hurt the ears. I experienced very little discomfort despite having protruding ears and spectacles. This is rare and not to be taken lightly.

The earpads form a nice seal around ears to isolate noise. As a side effect, ears tend to get warm after a while. While this is a common problem with all the circumaural models, the pleather used for earpads stays comfortable. I did not have an urge to yank the headphones out and sink my ears in icy water despite long hours of use in a relatively warm climate.

The headband has just the right amount of grip. It provides sufficient sealing pressure without making my head feel like it is being crushed by a neutron star. The headband does, however, significantly alter the hairstyle. I recommend a look in the mirror before heading out.


Sound Quality

The headset manages an excellent sound despite being wireless. Bass and treble are well balanced. There’s no specific thump or boom to bass. There isn’t anything very distinct about treble either. So, if you prefer bass heavy music, you might want to consider other options.


Sennheiser recommends their app – CapTunes – for tuning the equalizer to your preferences. However, this only applies to audio played through that device. So, if you use multiple devices including desktops, laptops, tablets, and phones, you’re out of luck.


The drivers are somewhat quieter than I expected. I constantly find myself keeping the volume on the higher side. While that is not a major issue and the drivers do produce loud enough sound, it is something worth keeping in mind if you prefer extremely loud audio. In my experience, raising the volume a bit does the trick just fine. Sennheiser may have done this in response to earlier complaints about loud defaults. This makes me wonder if it is done at the firmware level and can be patched away.

There is no crackling or rattling even at extremely loud volume levels.

Soundstage and Speech

Having full-sized drivers is a big advantage when it comes to soundstage. The sound feels natural and wide as opposed to cramped. This also makes speech crisp and clear. The combination of a large enough soundstage and active noise cancelling makes this headset a perfect tool for listening to articles and audiobooks.


Final Thoughts

Sennheiser HD 4.50 BTNC is a great all-rounder. It has great sound quality, battery life, and decent noise cancelling. Audio controls are easy to locate and intuitive to use. At ₹14,990 it strikes a good balance between cost and performance.

I have always held a view that ditching wires is a lifestyle change. And if you’re ready to embrace it, Sennheiser HD 4.50 BTNC (really, what a mouthful) is a great companion.

Netflix Can Be The Last Thing Required To Push India Against Net Neutrality

There are rumors about Netflix launching its service in India in a few weeks. This is great news. India will (hopefully) have legal access to the content that was mostly out of reach unless the creators chose to ship the DVDs. The rumor also says that Netflix is likely to partner up with ‘some 4G providers’ to ‘make faster streaming possible for their service’. As long as this partnership is about marketing and has nothing to do with how data is prioritized by the ISPs, it’s great. But if it’s not, then we’re in big trouble.

The rumor has it that Netflix will partner up with ISPs so that data consumed while watching stuff on Netflix won’t be counted towards data cap. In other words, free unlimited Netflix. Sounds great, right? Wrong. This is bad. This is very bad. Let me explain!

This violates net neutrality

I know, I know. It is the cool thing these days to slam everything and shout ‘net neutrality’. Hear me out; there are real problems here. If Netflix does this, it will end up having a monopoly over the streaming scene in India. In a country where a smooth stream at 240p is a luxury, you’d be left with two choices: Netflix at 720p/1080p unlimited vs. YouTube, etc. at 240p/360p. The choice is obvious. All the other services are out of the picture.

“But hey, that’s really not my problem. As a user all I care about is content. Other services can die for all I care.”

Agreed. But as a user, there are multiple content providers I’m interested in. I do like Netflix. But I also want to watch stuff on YouTube, Vimeo, and what not. So, even as users, we should still care about other services.

It’s bad for start-up culture

YouTube and Vimeo are big companies. They can put their money into similar deals with ISPs and get into the fast lane. As a user, that solves my problems. However, I am also an independent developer. Were I to launch a start-up in this domain, I’d be totally out of luck. I am pretty sure start-ups don’t have money to make shady deals with ISPs to put their content in a special category. In a country where start-up folks are worshiped as legends, I don’t think any more explanation is required. This deal could totally tear apart those beloved legends.

We are still likely to support Netflix

Facebook announced Free Basics and India rallied against it. (Or so I hope.) However, this is much less likely to happen in the case of Netflix. The people that oppose Free Basics are the people mostly unaffected by it. The opposing group has nothing to gain or lose (in the immediate future) with Free Basics. The very same people would benefit greatly from free Netflix. It is incredibly more difficult to say no to unlimited Netflix than to Free Basics.

Facebook’s target audience is a class of society that barely understands the concept of the internet, let alone a neutral internet. In the case of Netflix, however, the target is the middle and upper middle class that sees Netflix as a luxury too difficult to ignore. This is a difficult conundrum to deal with. On one hand, we have a neutral internet. On the other hand, we have hard-earned bandwidth and FUP, beyond which every byte costs a kidney. Many more people are likely to support Netflix than supported Facebook. Arguments are already being made in favor of the Netflix deal.

One common argument is that it is necessary for progress. Here’s how it goes: “If you want progress you have to let Netflix provide fast lanes. If we oppose it we will be stuck with slow internet and bad service forever.” But this is not how any of it works! Yes, faster streaming is good for progress. But that has nothing to do with fast lanes. Faster internet infrastructure will allow faster streaming. And fast lanes or not, infrastructure will have to be built anyway. So, no! We don’t need special Netflix-internet. What we truly need is fast internet.

Another argument is that it will reduce piracy. Well, maybe for the content that is available on Netflix India. (Netflix is known to restrict content geographically.) But again, that has got nothing to do with Netflix being faster than other services. With all other factors equal, faster internet for everything will have the same effect on piracy as faster internet for Netflix.

In other words, we have no reason to have a Netflix-special internet. What we should be rallying for instead is a faster and more reliable internet experience for everything. And that, I support wholeheartedly.

Audio Technica M50x

Handy FAQs for ‘Overspending’ Nerds

A music lover listens to his music through headphones. An audiophile listens to his headphones through the music.

The philosophical sounding quote above by a random Redditor kept me from buying a decent pair of headphones for months. But a few days ago, I finally decided to bite the bullet and got myself ATH-M50X. Let me just say this: They’re worth every penny spent!

However, there was a huge gap between the day I placed the order and the day I was able to unwrap them. First, I somehow forgot to opt for one-day delivery. Then, Amazon decided to screw up and delayed the delivery. So while I was waiting, I had to face a series of questions demanding an explanation for this ‘outrageous’ expenditure. In retrospect, I think I have faced similar situations many times for many things. Be it a mechanical keyboard, a notebook riser, a good router, or any other not-so-conventional purchase, I find myself answering the same questions over and over again. So, here’s the list of those frequently (literally) asked questions.

Whoa, how rich are you?

I’m not. But that’s beside the point. I just spend money on different things. Moreover, a good product is a long term investment. A good pair of headphones lasts years (or even decades) while a cheap one starts to fall apart after moderate usage of a few months. Same goes for keyboards, mice, routers, etc. Besides, all these things make a significant difference in comfort. If you plan to use something for long hours every day, for years, invest some money in it. Makes sense, doesn’t it?

If you’re still not convinced, here’s a handy table my friend put together.

  • Time spent using – Fancy dress, watch, etc.: that awkward one hour at a cousin’s wedding. Keyboards, headphones, etc.: every day, all day.
  • Comfort level – Fancy dress, watch, etc.: ask someone whose wedding saree weighed 30 kgs. Keyboards, headphones, etc.: significantly comfortable; give notebook risers a try.
  • Maintenance costs – Fancy dress, watch, etc.: “I can’t wear my wedding dress because the stonework scratched my wrist watch and the set is ruined.” Keyboards, headphones, etc.: almost negligible; even keyboard keys are replaceable.
  • Value over time – Fancy dress, watch, etc.: “Oh God! There’s mold in my Shalu.” (Hey mom!) Keyboards, headphones, etc.: almost constant.

Why are they so expensive? Are they big?

No. The earcups are about as big as circumaural cups should be. The headband is about as big as… a head. (Seriously, what were you expecting?) But they are comfortable. Way more comfortable than any cheap pair could ever be.

Are they very loud?

What, headphones? No, not that loud! The cost is supposed to be (not saying is, just supposed to be) a measure of quality. And quality has very little to do with loudness. If loudness increased proportionally with cost, my head would have been blown off by now.

Mechanical keyboards, on the other hand, are loud. Sometimes the noise they create is actually proportional to their cost. So, there’s that!

But didn’t you already have a pair of headphones?

Yup. Not just one – I have a bunch of those. But you probably have a wardrobe full of shirts (or more pairs of footwear than you care to admit, or whatever else you’re into – I try not to judge) and that doesn’t stop you from buying more! What’s your point? Different pairs of headphones sound different. Different keyboards feel different. Some are substantially better than others.

At some point we tend to look beyond the primary function – which is to produce sound (and to cover up skin in case of clothes, or to expose skin in case of some other clothes – clothes are complicated) – and start looking at how well it is performed. In case of clothing, it’s about look and feel. In case of headphones, it is sound signature, comfort, and other heavy words audiophiles use to sound like snobs.

You could get a pair of Beats at that price. Why put money on some secondary brand?

I guess I could get Beats. I could also burn my money in a barbecue grill, and in this case, it wouldn’t be much different! On a less snarky note, Audio Technica and Sennheiser are not secondary brands. Most definitely not secondary to Beats. Beats are bad. Real bad. I have made that mistake once. (Hey, I was young and was offered more money than my brain could handle. No judging, please!) If you’re still not convinced, Lifehacker has put up a nice rundown on how Beats perform against other models that are available at a way lower price point.

By the way, a similar answer goes for why I didn’t buy an iPhone instead of an HTC HD2. While iPhones do have really good hardware and software, the ecosystem is not something I personally enjoy.

Sony is good. You could get a pair at a way lower price point.

I could. I did. A few years ago. I still use it. Yes, it is good. Not as good as this one. I’m fairly certain that a comparable pair from Sony costs as much as these do. But thanks for the advice. I think I’m going to stick to the advice from my other friend though. See, he is a professional sound engineer and has an actual degree in acoustics.

Okay, but what do you actually hear different, if anything at all?

It is difficult to explain. I could probably say something about balance, levels and soundstage, but in all due honesty, those are just numbers. As another wise Redditor once said, it is more about feeling than hearing. It feels different. Feelings, by their nature, are tough to put into words. I’ll still give it a try though.

Let me be honest first. I have done some double-blind tests with my sound engineer friends and I cannot tell the difference between a decent pair and a really good pair 100 percent of the time. As long as the music and headphones are sufficiently good, I’m fine. But then there are those times when I can hear a phone ringing… on vibrate mode… in another room… across the hall. On those occasions, the ‘good enough’ music gear leaves a lot to be desired. I start noticing all the quirks. Sometimes it’s the muddled bass, sometimes it’s the rattling. It takes all the focus away from the music as well as whatever I’m doing, and I’m left with the kind of frustration that you get when your socks are twisted ever so slightly – just enough to annoy you, but not quite enough to make you take off the shoes and straighten them.

The M50x really shines in this case. First of all, it’s comfortable. The earpads actually go around the ears instead of sitting on top and pressing the earlobes down until it hurts. Secondly, the sound is awesome. It’s immersive. I’ve paired it with a FiiO E10K and the only thing more immersive I’ve found so far is reality. (Yes, nerds and geeks are aware of the world outside. Quite vaguely so, but aware nonetheless.) I’ve been using these continuously for the last week and so far they haven’t failed me. No more twisted socks.

If you have made it this far, thank you! I hope I’ll simply be able to direct people to this post next time the situation arises. Feel free to reuse/adapt this list to your needs.

Taming Windows 10 ListView for Good

I have been playing around with the UWP model that Windows 10 introduced. Recently, I needed to display an album of images in vertically scrolling views. The age-old, well-known ListBox is one obvious choice in this case. However, ListView in Windows 10 (and Windows 8.1, as far as I remember) comes with a host of new features and performance improvements like virtualization, instantly making it a better choice for displaying heavy content like images. In my (admittedly limited) testing, ListBox flickered slightly as new images were rendered, while ListView stayed smooth.

The Problem

However, the ListView also comes with a bunch of fancy animations (such as the tilt on press/click) that prove oddly disorienting when applied to large items. Time to get rid of those! What can a person do but fire up Blend and select the ListView > Edit Additional Templates > Edit ItemContainerStyle > Edit a Copy. Now, the natural course of action would be to find visual states like Pressed and Selected and remove the animations. But where are those states? This is all we get in the template:

I have removed all the other parts, but as you can see, there are none of the usual Grid elements and visual states. There’s just a ListViewItemPresenter, which doesn’t really allow you to change appearance and behaviors.
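For reference, the stripped-down default template that Blend generates looks roughly like this (attribute list heavily trimmed; the real presenter sets dozens of brush and animation properties):

```xml
<!-- Rough sketch of the default ItemContainerStyle template.
     The actual generated copy carries many more attributes. -->
<Style TargetType="ListViewItem">
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="ListViewItem">
                <!-- A single presenter: no Grid, no VisualStateManager states -->
                <ListViewItemPresenter
                    ContentTransitions="{TemplateBinding ContentTransitions}"
                    SelectionCheckMarkVisualEnabled="True"
                    PointerOverBackground="{ThemeResource SystemControlHighlightListLowBrush}" />
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>
```

Everything is baked into the presenter’s rendering code, which is why there are no states to edit.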

The Solution

Turns out, Windows 10 ships with two different ItemContainerStyles for ListView. It’s the second style – ListViewItemExpanded, as they call it – that we are interested in. It ships with a full UIElement tree and visual states instead of a ListViewItemPresenter. Here it is, in all its glory:
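The full expanded style runs to a few hundred lines, so here is only a heavily trimmed sketch of its shape; element and state names may vary slightly between SDK versions:

```xml
<!-- Trimmed sketch of the ListViewItemExpanded template structure. -->
<ControlTemplate TargetType="ListViewItem">
    <Grid x:Name="ContentBorder">
        <VisualStateManager.VisualStateGroups>
            <VisualStateGroup x:Name="CommonStates">
                <VisualState x:Name="Normal" />
                <VisualState x:Name="PointerOver">
                    <!-- edit or empty this storyboard to kill the hover effect -->
                </VisualState>
                <VisualState x:Name="Pressed">
                    <!-- removing the pointer-down theme animation here
                         disables the disorienting tilt -->
                </VisualState>
            </VisualStateGroup>
        </VisualStateManager.VisualStateGroups>
        <ContentPresenter x:Name="ContentPresenter"
                          Content="{TemplateBinding Content}"
                          ContentTemplate="{TemplateBinding ContentTemplate}" />
    </Grid>
</ControlTemplate>
```

With the states exposed as plain XAML like this, stripping an animation is just a matter of deleting the offending storyboard.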

Now, we can disable visual states and customize templates to our heart’s content. The whole thing is documented here at MSDN. Hope that helps!

Tips on Contributing to Projects

I have been involved in open source projects for quite a while now. While my projects haven’t seen as much of a following as some of the big fish, they have had their fair share of contributors. More often than not, the contributor is a newbie full of enthusiasm and with virtually zero experience with version control systems. Here are a few tips that may prove helpful to someone trying to dip their toes into the open source software movement. While the tips are meant for beginners, a seasoned developer new to FOSS should find them helpful too.

Find something you can relate to

Before you even think of contributing to any project at all, find something that you truly care about. While it is tempting to pick a popular project to get involved in (and it sure does look good on your résumé), I would advise strictly against starting with one. When you start working on a project that you cannot relate to, the enthusiasm fades pretty rapidly and you are left with an uninteresting task on your to-do list. Best case scenario, you stick with it and complete, through sheer grit, whatever you started. Worst case, you abandon it and a ghost fork becomes a part of your life (okay, your GitHub profile) forever. In both scenarios, you end up with a sour taste in your mouth and decide never to go back to contributing. This is bad. The world needs more contributors.

Instead, choose something that you use regularly. Find something that you think could be done better. It can be as simple as repositioning a button in that grocery app you use or fixing a spelling mistake. As long as you care about that feature, go for it. When you’re done, you will have a sense of accomplishment every time you use that feature. This keeps you motivated when you move on to bigger things.

Announce your plans first

So you have picked a project and decided to do something. It’s very tempting to fork the repo and jump straight to coding. Hold on! The first thing you should do is announce your plan to the maintainers and members of the project. As a maintainer, the last thing I want is two contributors independently working on the same feature. Not only does this waste precious time and development effort on the contributors’ part, but it also puts the maintainer in the position of choosing one contributor’s work over another’s. Secondly, the project owners may have something planned that is different from what you are about to do. As much as they hate turning down a contribution, it is sometimes necessary to keep the project on track.

All of this can be avoided by communicating early on. If the project has an issue tracker (all GitHub repos have one), create an issue and write your plans down for everyone to see. If the project has an accessible mailing list, announce what you plan to do there. Use an IRC channel if one is available. If you are unsure whether the feature you are planning falls within the project’s roadmap, discuss it with the owners in detail before you start working. Maybe there is a reason why that button was placed there.

Don’t expect to be spoonfed

Contrary to what your academics might have taught you, most projects are not documented down to the class level. Most projects may (and do) have something like a quick start guide that… gets you started (duh). But from there, you are on your own. And that is not a bad thing. I’d rather have developers spend their time writing good, self-documenting code than creating redundant documentation.

No, I will not make pretty class diagrams for the whole project. I won’t walk you through every API my services expose either. You know why? Because I think it’s pretty obvious what a class called ConnectionManager does.

Stick to the style

Every good project has its style. Naming conventions, coding patterns, even commenting styles are followed throughout the codebase. This is what makes it easy to understand without any documentation. Need to find out where a UI-related constant is defined? Obviously, in the UIUtils class. Without this consistency, the whole project starts rolling down the spiral of technical debt.

When you start making changes to the code, study the project’s styles carefully. It is okay to write messy code while experimenting. But before you ask the project owner to merge your changes, make sure you have refactored your code to match those styles. It saves everyone’s time.

Commit early, commit often

I cannot stress this enough. Divide your code changes into small, easy-to-digest commits. While there is no rule regarding how frequently you should create commits, it’s ideal to create a commit whenever you finish something. For further reading, refer to this excellent post by Jeff Atwood.
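To make this concrete, here is a tiny throwaway illustration (hypothetical file names, disposable repo) of packaging two unrelated fixes as two separate commits instead of one lump:

```shell
set -e
repo=$(mktemp -d)              # throwaway repo so nothing real is touched
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# One logical change per commit: easy to review, easy to revert.
echo "button moved" > ui.txt
git add ui.txt
git commit -qm "Reposition settings button"

echo "typo fixed" > readme.txt
git add readme.txt
git commit -qm "Fix spelling in README"

git log --oneline              # two small commits, each telling one story
```

A reviewer can now skim the log and understand (or revert) each change independently, which is exactly what a maintainer wants from a pull request.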

While this list doesn’t cover everything, it should be a good starting point for people new to the FOSS world. Got something to add? Let me know via the comments.

A Song of Plots and Deaths

Warning: This post contains big spoilers from Marvel’s Daredevil, Marvel’s Agents of S.H.I.E.L.D. and Game of Thrones. In case you haven’t watched all the episodes aired as of this writing, I suggest you hold off on reading this until you’ve caught up.

So the fifth season of Game of Thrones ended a few days ago. As expected, many died and many cried. Killing people seems to be the trend these days. As the end of a season approaches, writers start killing characters. Heck, they kill people when the plot starts getting a bit boring. That’s why S.H.I.E.L.D. showed Jiaying getting killed, Daredevil killed Wesley and Game of Thrones killed, well, everyone. This plot device of killing characters is an important weapon in an author’s arsenal. It comes with a guarantee of intense effect. However, the weapon is getting more and more blunt, and Game of Thrones is to blame. So many characters unexpectedly disappearing seems to have desensitized me. When Wesley died out of nowhere, I was shocked. Jiaying made me flinch a bit. For Jon Snow, I just went meh. The only reason? It’s Game of Thrones! Every time a character comes up on screen, my brain assigns some probability to their death. When a character comes on screen in the last 10 minutes of the last episode of the season, that probability shoots way up.

Don’t get me wrong. Many deaths in the series were well planned and, I daresay, necessary for the plot. Some others, not so much. They merely contributed to the process of desensitization. Now, the death of Jon Snow has brought the story to a peculiar point.

If he stays dead, there is no one left to ride the dragons and lead everyone against the Winter. The fan theory of him having a secret real identity and belonging to some noble house doesn’t come true. He can’t team up with the Dragon Queen either – you know, being dead and all. On the other hand, if he is resurrected, the series has literally fallen to the level of Bollywood TV, where characters die just for the sake of it and are brought back by some voodoo. In that case, the death adds absolutely nothing to the story and the series hits rock bottom in terms of quality.

It is going to be interesting to see how the author manages to get this story out of the ditch. Meanwhile, let the fan theories roll!

State of Windows Phone

Warning: This is a rant. You have been warned!

While building app packages for Kodi Assist, I accidentally ticked a small checkbox that resulted in the app not being allowed to install to the SD card. This update went live in the Store and I was completely oblivious to the folly I had committed. A couple of days later, I went to check the reviews and lo and behold – a single star. And it wasn’t just one isolated incident. There were many. Apparently, people really hate it when they are not allowed to install an app to the SD card. One of the reviews went like this:

Not installable to SD card? No stars! Previous version was OK but I ban every app that prevents installation to SD card.

The app is less than 1 MB in size. I don’t suppose it makes a lot of difference where it is installed. Moreover, I am easily reachable via many communication channels like Twitter and Reddit. Had this been brought to my attention, I could have avoided the bad rating. But a ban? Ban? Just because of a tiny error? Needless to say, I quickly pushed another update that allowed installation to the SD card and replied to the review. I am yet to hear back.

In another incident, I was requested to enable some sort of haptic feedback on button presses on the remote. It actually makes it easier to operate the remote without looking at the phone screen. Admittedly, this feature is subjective and some people may find it annoying. Apparently, people do find it annoying. While I had plans to add a setting to disable this feature, I could not finish it in time and an update was pushed out that forced everyone to put up with a vibrating remote. I knew some users were going to be unhappy about this. Sure enough, I got an email the very next day that started out like this:

Hi, I am disappointed with new addition to remote…

Accepted. Guilty as charged. But my crime was to have forced a feature, not failed the faith of humanity! The option to disable vibrations is being added as we speak.

Two small incidents, but they shed light on a bigger issue. Allow me to put things in perspective here. For the most part, the app is handled by a single person. (Although I have had help here and there.) The app is free. All the source code is free. It doesn’t have ads. It fits under 1 MB. If most of the ratings and reviews are any indication, it’s a quality app. And yet, one small mistake, one mistaken keystroke, and the pitchforks come out. This is depressing for me as a developer. Not only that, but it is bad for the platform as a whole, especially when there aren’t many Windows Phone developers out there.

With that said, not all users are alike. I have had the opportunity to interact with the pretty awesome user base that Windows Phone (or any other platform, for that matter) has to offer. I have had words of praise and encouragement from all over the globe. And as long as there are users like those, development will continue.