Tuesday, September 1, 2015

In search of a thin client that satisfies all my needs

Getting work done irrespective of where I am is one of those "stretch goals" that I keep trying to figure out how to achieve.
The requirements for such a device were:
* Lightweight
* Easy to carry around (physical dimensions, etc)
* Battery life / charging capabilities
* Screen size: Should be large enough to work on
* Price
Until now, a thin and light laptop was my go-to machine - which is why I preferred Lenovo's X series laptops: the X301 and then the X1 Carbon.
With the OnePlus One that I have right now, I think it is almost ready to displace the X1 as my primary go-to machine.
OnePlus One pros:
* The screen size is large enough to not have to squint when I'm reading it.
* The battery life is quite good: over 12 hours.
* Miracast allows for a much larger screen as well.
* Connectivity: Wi-Fi of course, but additionally, the 4G/LTE allows for more than what the laptop has.
Software:
I realized that the software I really need most of the time is an ssh client, OpenVPN and remote desktop/VNC, all of which are available on Android.
With OpenVPN I can connect into my home network, and the ssh client lets me log into any of my Linux VMs.
The RDP client from Microsoft can connect to the Windows VM.
I am set up for work from anywhere!
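As a sketch of how the ssh side of that workflow stays one-command simple, host entries for the home VMs can live in an ~/.ssh/config (every hostname and address below is made up for illustration):

```
# ~/.ssh/config -- names and addresses are made up
Host homevm1
    HostName 192.168.1.10   # reachable once the OpenVPN tunnel is up
    User me

Host homevm2
    HostName 192.168.1.11
    User me
```

On a laptop that makes the login just `ssh homevm1`; Android ssh clients typically model the same thing as saved connection profiles rather than a config file.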
Things that I haven't figured out yet:
A keyboard/mouse combo that is designed for phone-like devices.
A screen that can convert this phone into something tablet-sized.
Maybe the Asus ZenFone is the right device then?

Thursday, August 13, 2015

Docker: It's not just about the runtime.

I've started using Docker in my home setup for a side project that I tinker with.
I just finished setting up multiple build systems in which I can compile that project for x86 and x64 using Docker containers.

Why?
The compilation environment before Docker was a set of virtual machines that had the exact dependencies required to compile the application. To keep the environments clean, these virtual machines were not used for anything other than compilation. This was wasteful, but unavoidable ... until I learnt about Docker.
Now the dependencies required to compile the application are described in a Dockerfile, and the image is pushed to my in-house repository. Side note: there was no need for the repository to be in-house; it was just some fun thing I was trying out... and it worked, so why not use it?
The Dockerfile is the perfect place to document the dependencies and the uploaded image allows me to instantiate it as a container on any machine at home.

How?
I first started with a Dockerfile that would create one big monolithic image. Something like:

FROM ubuntu:15.04
RUN apt-get update && apt-get -y dist-upgrade && apt-get -y install dependencies

This generated an image that was about a GB in size. The moment I started making the next image, I realized that it would eat up a whole lot of space on my laptop. This would not do.
Docker's storage layers are based on sharing. The next step, therefore, was to begin sharing.
For this, I created a Dockerfile that would first be an "updated base image" of Ubuntu.
The second Dockerfile would build on top of that to create a layer that held the first set of common dependencies.
The third Dockerfile would build on top of the second to create a layer for the not-so-shareable dependencies.
This organisation allowed me to create three different final images that shared about 400-500 MB of data between themselves.
So the first Dockerfile was (named firstlevel):
FROM ubuntu:15.04
RUN apt-get update && apt-get -y dist-upgrade && apt-get -y install binutils vim build-essential
The second (named secondlevel) was:
FROM firstlevel
RUN apt-get update && apt-get -y install second_dependencies
And then the third was:
FROM secondlevel
RUN apt-get update && apt-get -y install third_dependencies

And to make the creation of these images simple, I created a directory for each Dockerfile with a Makefile next to it, and a controller Makefile that invokes make in each sub-directory.
When I run the topmost make, it builds 6 images automatically within about 10 minutes. There's very little interaction required to recreate everything.
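One way such a controller Makefile might look - the directory names and image tags here are illustrative, following the firstlevel/secondlevel/thirdlevel naming above, not necessarily the real project's layout:

```make
# Controller Makefile: build every image by recursing into each sub-directory.
# The ordering prerequisites make sure each FROM line finds its base image.
SUBDIRS = firstlevel secondlevel thirdlevel

.PHONY: all $(SUBDIRS)
all: $(SUBDIRS)

secondlevel: firstlevel
thirdlevel: secondlevel

$(SUBDIRS):
	$(MAKE) -C $@

# Each sub-directory's Makefile then only needs something like:
#   image:
#   	docker build -t $(notdir $(CURDIR)) .
```

The explicit ordering matters: docker build has no idea that secondlevel's FROM depends on firstlevel having been built first.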
Compare and contrast this against what I need to do with my virtual machines for a moment.
To recreate my virtual machines, I would have to install the operating system or clone it, then run the script (which I should have created first to document what the dependencies are), and then it's finally ready. One build system would take close to 2-3 hours to recreate.
Instead, I have 6 Docker images ready in 10 minutes.
Cool!
Now that they're ready, it's also simple to start off a compilation on any physical machine. I'm not limited to running the compilation ONLY on the virtual machine created for that purpose.
The container can be started on my laptop, my desktop, my friend's laptop, any virtual machine I have, or even on an Amazon EC2 instance.
Wow!
Machines ought to be made simple to use. Even for hackers like me.
Remember: Cattle, not pets.
*Wow intensifies*

Wednesday, August 5, 2015

Iterative design

Designing is hard.
Designing for performance is harder.
Designing for performance under extreme load is an art that requires a village.

In VSAN, I started off by designing a system that would work under 90% of cases and workloads.
Even before that milestone was achieved, the performance team had jumped in and shown me that the initial design was just not good enough for the 10% use case. My first thought was "Who cares? That's really not what I'm targeting".
It turns out that even though the performance folks were "not holding it right", they were doing exactly what all our customers would be doing first: Running a completely fabricated synthetic benchmark... which meant that even though it would be less useful, I would have to cater to it.
The result was me tweaking the design so that it would:
1. do no harm to the 90% cases
2. address the 10% cases as best as I could.

At the end of this design phase and implementation, the performance had jumped by orders of magnitude for the 10% cases and had improved by a few percentage points for the 90% cases.
In addition, it had reduced "performance jitter".

Even before I had time to bask in the glory of this wave of improvements, I had already been shown the next problem: Performance under load.

This is where simple unit tests, regression tests and regular overload tests wouldn't do.
It required automation of all those tests and one crazy person who'd go stress the system beyond not only what we'd publicly advertise, but also beyond what we'd privately accept as the limits of what we've built.

The way he reported the problem was also funny enough for it to become a meme within my team: He'd start some ridiculously stressful test and go for lunch. On returning, the stress test would have failed (of course) and he'd come with a big smile and say
"I started the test and went for lunch. When I came back it had crashed. I didn't do anything else."
We considered the possibility of not letting him ever go for lunch after the third time. :D

That fellow's efforts led of course to a wave of even more improvements and bug fixes.
The final design was a piece of art. It had beautiful poetry and rough Klingon.
I could never have achieved the final design in the first shot.

Learnings:
  • There was no way the design could have been made perfect from the first day.
  • Even if I had the "perfect design" it would be almost impossible to implement it from scratch.
  • There's no way to predict the performance optimizations required without having experienced the problems first. This goes well with the philosophy of "premature optimization is the root of all evil".
  • There's no way to do all this alone.
  • Unit tests are only the beginning. Performance oriented test teams and automation are mandatory. After that is done, there has to be at least one person who will be the "chaos monkey" to find new ways to break the system.

Share it with the world

A couple of days ago I googled for how to map user VA in Linux kernel mode.
It led me to a bunch of interesting articles that helped me understand how to do what I needed to do: zero copy within the Linux kernel.
One of those links is a snippet on a github project.

I've made a bunch of interesting stuff that simplifies what I do, for example automating simple repetitive tasks.
I realized that it would be awesome if I could share the things that worked for me with the rest of the world. That way, other people can either use the stuff I've created directly or use it as a reference for their own purposes.

Here's the project on github: simplify

The first entry: how to run Ruby gems on Ubuntu. Why? Because the gem's executable path isn't automatically added to the PATH.
Also, as a typical geek, I don't want my PATH to be cluttered up with stuff that I don't use all the time.
Therefore, I need a script that will set up the PATH and run the gem that I want to run, without having to go hunt for the path every time.
Seems like a simple enough problem to solve.... and it is. Look at the script.
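A minimal sketch of such a wrapper - the gem bin directory is an assumption (on Ubuntu it is usually something like $HOME/.gem/ruby/&lt;version&gt;/bin; check `gem environment` for the real path), and the actual script in the repo may differ:

```shell
#!/bin/sh
# rungem (sketch): prepend the RubyGems executable directory to PATH for
# this one invocation only, then run the requested command.
# GEM_BIN is an assumed default -- override it to match `gem environment`.
GEM_BIN="${GEM_BIN:-$HOME/.gem/ruby/1.9.1/bin}"
PATH="$GEM_BIN:$PATH"
export PATH
# e.g. `rungem sass style.scss` runs sass without cluttering the login PATH
"$@"
```

Because PATH is only changed for the wrapper's own process, the login shell's PATH stays clean.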

I've added license information because it just makes sense to clear it up front.

Hey! Long time dude, where were you?

Over the last 5 years I got so busy with my day job, where I was building really interesting shit, that I lost track of my blog.
Now that I'm out of VMware, I realize that I have a lot of time to actually do as much as I used to before it.
One of those "many things" is writing about my technical mad-hattery: my personal account of being as mad as a hacker.
The last 5 years haven't been a cakewalk: A lot of things changed and believe it or not, I lost a lot of my innate confidence and "lol-giggle-look-what-I-did" way of life.
Reading the few posts on my own blog reminded me that I was and still am capable of a metric fuckload of stupid-silly-and-yet-awesome stuff.

It's time to get back to having some fun.

Monday, January 10, 2011

Qt 4.7 on Symbian

Documenting this for future reference:
1. Follow the steps on this page.
2. Pick up the Qt Mobility package from here, extract it, then pick up the appropriate zip (e.g. qt-mobility-symbian-1.1.0-epoc32-5.0.zip) from the top extracted folder. Put it at the same level as epoc32. Extract there.
3. Copy [extract location]/features/mobility.prf.template to [NokiaSDK]/Symbian/[SDK]/qt/mkspecs/features/mobility.prf

After these steps, make sure you re-run qmake for any applications before you build.

Friday, October 22, 2010

XOrg and Nvidia card freeze

My machine used to freeze for about 13 seconds repeatedly.
Debugging:
1. Notice it happening
2. Find out the regularity. Find out the duration of the freeze - is it consistent? Yes
3. Google for the issue. While typing in the browser, realize that keyboard and mouse events are still reaching the browser.
4. Look at syslog, dmesg - find nothing
5. Look at the Xorg log. See that it has repeated logs of the type
"(WW) Oct 22 09:37:51 NVIDIA(0): WAIT (1, 6, 0x8000, 0x000016d8, 0x000018f8)"
6. Google for "Xorg nvidia wait freeze".

Found the Solution