(I’m talking about a bunch of different hardware here, and as such I’ve chosen to link the specific devices in question. These are things I bought and have used, and I’m linking them as a reference in case people are curious.)

I’ve been working on fixing a few annoyances with my home network. There were a few areas that weren’t covered: most of my systems were only accessible over wireless, which in a large apartment block takes a lot of tweaking to get right, and I had bandwidth between the wrong systems. I also had issues with my trusty old DD-WRT infused WRT54GL (which inexplicably stopped working wirelessly), and switched it for a shiny newish Asus RT-N56U. It’s an older model, but with no 5GHz gear, an AC router would be overkill. Having fixed up my WRT54GL (a 30-30-30 reset did the trick), I figured I’d tackle the dead zone in my apartment. Putting in proper ethernet cabling isn’t an option for now, so I ended up getting a pair of HomePlug AV 200Mbps adaptors – a pair of HL113Es from Aztech. They worked fine for initial testing, but in hindsight, the miniature models aren’t that useful. I ended up going with a HomePlug AV 500 4-port gig-e switch (also from Aztech) and a 500Mbps TP-Link unit. There are a few odd things I came across that are worth considering if you’re looking at adding a few HomePlug AV units.

I had a few issues getting it to work, mainly because of some oddities of the standard – and how it’s managed.

Firstly, HomePlug AV is a black box. This is perfectly fine when it works. When it doesn’t, though, there are almost no diagnostic options. You have the blinky signal light: it’s green when everything is perfect, orange when it’s meh, and red when it’s horrible. When it doesn’t work, it *should* be off, but I had a unit whose solid green light would go on and off every 10 seconds or so. This is not in the manual. This is not in *any* manual. It tended to happen on my gig-e switch when the dryer was on, while I was using one of the 200Mbps adaptors in a specific location. I can change the network name using a utility, but I can’t check which network name I’m currently using. I can talk to these units, but they don’t *talk back*. This makes diagnosing problems a pain in the rear.

And when I say ‘any’ manual, it’s because HomePlug units all seem to use the same types of chips, with the same reskinned utilities. Don’t bother installing multiple utilities – find one that’s reliable. (Aztech has different versions for different models, of varying reliability. I’ve tried 4-5 different utilities for science, and they all work the same. I like the TP-Link variant, the Power Packet utility, or the Aztech HomePlug utility 5.11 – Aztech’s 4.0 series is rubbish.) They’ll all work together as long as you don’t mix legacy 14 and 85Mbps units with newer ones.

These manuals are also useless. I realise that, as a standard with multiple implementations, there may be minor differences. However, I can’t find a definitive answer to what happens when I mix 200 and 500 gear. Some say it depends on how the traffic flows – a 500Mbps unit and another identical unit will talk at 500Mbps, and a link will only drop to the slower speed between dissimilar units – while others say the whole network runs at the lower speed. I’ve found there seems to be a speedup from using only the faster units, but the units never test as well as they’re rated. I’ll do more tests as I upgrade to a network that covers my needs with entirely 500Mbps units.

I went with the mini plugs because they were cheaper and I wasn’t sure the whole thing would work. The passthrough units aren’t that much more expensive, don’t take up a socket, and filter any device plugged into them. Since powerline networks are sensitive to line noise, and switch-mode power supplies are noisy, it makes a LOT of sense to use one ahead of a power strip where you’re connecting your main network to a powerline network. Don’t bother with the mini units – just spend the 10 or so dollars more and get a passthrough unit.

Location is everything. The issues I had with the dryer went away when I moved the HomePlug unit to a socket further from it. If you are in an electrically noisy environment with big electric motors, you may want to consider isolators, or simply moving the HomePlug units. I’d test moving units one by one, since with the ‘ring’ topology HomePlug has, you may see unusual effects from noise – I had a nearer unit having issues while a further unit still worked, yet moving the nearer one improved signal quality throughout the network.

HomePlug units come with a default network name. The makers assume you either won’t change it (which is unacceptable for me) or will set a random one. I ended up using the utility to set a specific network name, since the button-press method doesn’t actually give any feedback on whether it worked. It’s actually easier for me to do this than to use the ‘simple’ one-button system, where you need to press the button for *exactly* 1 second, or 3 seconds, or… you get the idea.

Once I got it working and sorted through the noise issues, it’s pretty decent. With careful rearrangement of my HomePlug units, I managed to get a strong, stable connection. It’s reasonably fast – I’m seeing about 40Mbps each way on LAN Speed Test with the current arrangement, while my old arrangement was ~2Mbps downstream and ~4Mbps upstream. I’ve even got it to stop cutting out whenever the dryer runs. I currently have 3 nodes, one of them at my router (you probably don’t want 2 there – it might cause the router to loop in on itself and cause all reality to self-destruct).

On the whole, I’d rather have ethernet, and you almost always have wifi – HomePlug’s a pretty good way of reaching places that have neither. Complaints about documentation aside, it’s a pretty useful addition to a network. I’d suggest getting a starter kit with the cheapest hardware per unit you can find and testing, since you’re rarely going to hit theoretical speeds. The better ‘classes’ do seem faster, though.

After yet another random internet disconnection on my HTC One V, and needing to reboot it, I decided to get a new phone. I wish I could say I made an informed decision based on my needs and requirements.

I didn’t want to pay too much for it, since I’d be buying it off contract. I was buying off contract because I had a grandfathered 3G plan that gave me a whopping 12GB of data that I hardly used, so the phone wouldn’t need 4G. I wanted something that wasn’t loaded up with unremovable applications from telcos from *every country in the region*. I sort of liked the size of the One V, but the chin made it awkward in the pocket.

I wanted something fast, stable, and not likely to suddenly and oddly decide that it couldn’t find a DNS server.

I’d decided to get the Moto G anyway, and it ticked all these boxes. I wouldn’t say it’s entirely stock – it has a camera application that took a while to get used to (and is one place where I’d miss the One V for a while), but there’s no silly application from Telekom Indoglossia cluttering it up. The biggest tradeoff, I suppose, is the lack of an SD card slot. I’ve tended to put in the biggest microSD card I could, ‘in case I needed it’. On my tablet it has usually proven quite useful, since I occasionally watch movies on it; even on the One V, I never really ran out of space. Considering I have wifi through the local free wifi network, and more network bandwidth than I can use, I’ll survive. If I need a load of space, there’s always a USB OTG adaptor and a USB key, no? My retailer threw in a flip-front case. Apparently they didn’t have black, so I went with blue. It replaces the back cover and becomes an integral part of the phone, which is a lot slicker than a third-party case.

I’ve always been of the opinion that sub-4-inch screens were the ‘right’ size. I used to have those tiny tiny candy bar Nokias, and considered them the ‘perfect’ size. On the one hand, I found that typing on a keyboard that size was a pain. On the other hand, an excessively large screen means you need both hands to operate the phone. I kind of like the 4.5-inch screen on the Moto, since it’s large enough for someone with reasonably large hands to use one-handed (or as I call it, paw-friendly), yet pocketable.

The Moto has a lot of good reviews, and with good reason. While long delayed, it’s one heck of a polished, fast and usable device that probably punches above its cost bracket. Pity it’s the last generation of Google/Moto phones with the Lenovo acquisition, but in my experience, Lenovo takes a while before they start mucking with things.
As for my One V? I suspected the problem was with the firmware, and HTC doesn’t seem terribly inclined to update it. I got it unlocked, installed ClockworkMod recovery, then went over to HTCdev and experimented with a few ROMs. The Pomega ROM seems to work (other than the camera app), while another one didn’t work wirelessly. Though the phone bit works, I ended up switching it to flight mode and turning on wireless.

It feels a lot faster and cleaner, so it was probably worth it. It’s still a decent spare phone.

This pair of headphones and I have a long, convoluted history. I bought it on the recommendation of two of my friends who have the non-TI model (and they bought theirs based off my research when I was looking at replacing my ATH-M50). Let’s start with the trivial stuff.

This alone is epic. BEST WARRANTY CARD EVER.

The headphones came in a nice fabric case, with 2 different sets of ear cups (leather and velour) and a headphone cable that connected to either cup of the headphones, all in a nicely shaped foam insert. Pretty awesome *usable* packaging.

This is how it was packaged. The other cups are below and are pretty easy to swap out. Embarrassingly, I haven’t tried them yet.

Build quality is nice and solid – the cable is easily replaceable (and apparently there are clones of this with different cable lengths), and it feels sturdy and comfortable on my head (but I don’t have a monster noggin).

Sound quality is excellent. Sound quality is also highly subjective, I guess. I’m not one of those folks who can wax lyrical about a pair of headphones, but there are a few things I can say. It’s crystal clear, has good, tight bass, and I occasionally hear something off these phones and assume it’s someone behind me. I’d initially assumed my laptop’s onboard sound card struggled to power them, but a few experiments later, I realised it just sounded ‘off’ compared to my better gear – the FA003 is SLIGHTLY unforgiving of totally crap sound cards, but unless you’ve got a badly designed one, or something off a laptop that doesn’t sound all that great anyway, it’s a non-issue. I do regularly use it for non-audiophile things like gaming and youtubery off my desktop. I generally pair it with a desktop amp and an external DAC for music, and run it off the amp and my onboard sound card for everything else. These work *great* with a nice amp and sound card, and they don’t need to be terribly expensive.

One final note – I special-ordered it. The other two guys I know who have it got theirs online. It’s a pain in the ass to get – unless you can find someone to shut up and take your money.

I used to be a Google Reader user before it was shuttered. I then switched to The Old Reader – they had one hell of a site, but had scaling issues, the poor dears, and decided they couldn’t handle us refugees. Let’s be honest here: I could move again, but at this point, I have trust issues. I want a reasonable amount of control, the option to move to another server taking my feeds with me, and the knowledge that I won’t have to switch providers for reasons I can’t control.

The alternative that’s currently best loved and maintained is tt-rss. While it will run on a standard LAMP stack, it apparently works better on Postgres – so I figured I’d go with Ubuntu, lighttpd, Postgres and PHP. I also decided to document the whole process. The web stack setup is based off HowtoForge’s guide to setting up a lighttpd/MySQL/PHP stack; I’ve modified the instructions to use the things I use, and streamlined it a bit. I *do* assume you’re building off a barebones VM – you’ll need to adjust the instructions if you’re running, or planning to run, something like Apache or Nginx.

Firstly, the packages you’ll need:

sudo apt-get install lighttpd php5-fpm php5 php5-pgsql php5-curl php5-cli postgresql

Let’s break down what these packages are for. You have your web server and the variant of PHP it runs – in this case lighttpd and php5-fpm, with php5 being a prerequisite for the latter. You have the Postgres-related packages, postgresql and php5-pgsql. Finally, you have php5-curl and php5-cli – the former is a suggested prerequisite for tt-rss, and the latter is needed to run tt-rss’s update script. Naturally, if you’re running another web server, switch the appropriate group of packages for alternatives.

The default configuration for lighttpd assumes you use spawn-fcgi rather than FPM for PHP. You will need to make two changes to the config files to make sure things work fine. These are identical to what is done in HowtoForge’s tutorial.

First, you need to enable cgi.fix_pathinfo in /etc/php5/fpm/php.ini. To do this, open /etc/php5/fpm/php.ini with your favourite editor and find the line that says

;cgi.fix_pathinfo

and remove the semicolon to enable it.
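If you’d rather not open an editor, the same edit can be done non-interactively with sed. This is just a sketch, demonstrated on a scratch file in /tmp so nothing real gets touched – on an actual box you’d point it at /etc/php5/fpm/php.ini instead.

```shell
# Demo on a scratch file; substitute /etc/php5/fpm/php.ini on a real box.
printf ';cgi.fix_pathinfo=1\n' > /tmp/php.ini.demo

# Strip the leading semicolon from the cgi.fix_pathinfo line.
sed -i 's/^;cgi.fix_pathinfo/cgi.fix_pathinfo/' /tmp/php.ini.demo

grep '^cgi.fix_pathinfo' /tmp/php.ini.demo   # prints cgi.fix_pathinfo=1
```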

Next, to set up php-fpm

sudo su
cd /etc/lighttpd/conf-available/
mv 15-fastcgi-php.conf 15-fastcgi-php-spawnfcgi.conf
nano 15-fastcgi-php.conf

Then paste in the following – for Ubuntu 12.04. Annoyingly, this will fail horribly on Ubuntu 13.04, where you need to look up the appropriate, sockets-based alternative here.

The rest of the instructions are identical, so meh, do your homework ;p

# /usr/share/doc/lighttpd-doc/fastcgi.txt.gz
# http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi

## Start an FastCGI server for php (needs the php5-cgi package)
fastcgi.server += ( ".php" =>
        ((
                "host" => "127.0.0.1",
                "port" => "9000",
                "broken-scriptfilename" => "enable"
        ))
)
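For reference, the sockets-based variant for Ubuntu 13.04 looks something like the fragment below. Treat this as a sketch, not gospel: the socket path is whatever `listen` is set to in /etc/php5/fpm/pool.d/www.conf (the default at the time was /var/run/php5-fpm.sock), so check yours before pasting.

```
fastcgi.server += ( ".php" =>
        ((
                "socket" => "/var/run/php5-fpm.sock",
                "broken-scriptfilename" => "enable"
        ))
)
```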

Now that the configuration is done, we need to enable the new configuration and reload lighttpd and php-fpm:

lighttpd-enable-mod fastcgi fastcgi-php 
/etc/init.d/lighttpd force-reload
/etc/init.d/php5-fpm force-reload

At this point, our environment should be ready for the install to start. A good way to check is to create a PHP information script and take a peek.

Create a file in /var/www/ called info.php containing:

<?php
phpinfo();
?>

Go to your web server and check that it works – if it does, look for entries saying ‘pgsql’. If they’re there, we’re ready to create the database.

sudo -u postgres psql postgres

should throw you into the postgres shell, which looks something like

postgres=#

You need to create a new user:

postgres=# CREATE ROLE ttrss WITH LOGIN ENCRYPTED PASSWORD 'password' CREATEDB;

then create a new database with the new user as an owner

postgres=# CREATE DATABASE ttrss WITH OWNER ttrss;

and, just to play it safe:

postgres=# GRANT ALL PRIVILEGES ON DATABASE ttrss TO ttrss;

Download and unpack tt-rss from http://tt-rss.org/redmine/projects/tt-rss/wiki
I tend to do this using sudo wget, since I’ve had little luck doing it any other way. Probably not best practice. You’ll likely also want to rename the folder to something friendlier, and change its ownership to www-data (so tt-rss can automatically write the config file it generates).
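The unpack / rename / reown steps look something like the sketch below. The tarball name and version here are made up for illustration (check the download page for the current release), and it runs against a dummy tarball in /tmp so it’s safe to try; on a real install you’d do this in /var/www against the real download.

```shell
# --- demo setup only: fake a downloaded release tarball in /tmp ---
rm -rf /tmp/demo && mkdir -p /tmp/demo/tt-rss-1.11   # hypothetical version
touch /tmp/demo/tt-rss-1.11/index.php
tar -czf /tmp/demo/tt-rss.tar.gz -C /tmp/demo tt-rss-1.11
rm -rf /tmp/demo/tt-rss-1.11

# --- the actual steps: unpack, rename to something friendlier, reown ---
cd /tmp/demo
tar -xzf tt-rss.tar.gz
mv tt-rss-1.11 ttrss
# chown -R www-data:www-data ttrss   # on a real box: lets tt-rss write its config
ls ttrss                             # the unpacked tree, under the friendlier name
```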

At this point, go to the website, set some sane defaults and get cracking – you can pretty much coast through the rest of the setup.

Upgraded my VPS (which this site runs on) to a newer Ubuntu LTS – this was surprisingly simple. My database is hosted elsewhere, so that was a non-issue (I need to test the migration process at some point). All I needed to do was copy /var/www and /etc/ (though I didn’t copy over everything), dump a copy of my installed packages (which I ended up not using ;p), reinstall, and copy over what was needed – in my case, web server configs and pretty much nothing else.

Surprisingly painless!

I’ve never been a fan of the oldschool Linux way of separating parts of the file system – keeping /home might make sense, but I generally went with a single large drive. Ironically, on Windows, I’ve chosen a non-traditional file hierarchy – boot+applications, storage, and transient files on separate drives – and I find it works very well.

Systems accumulate garbage. Desktops fill up with icons. Downloads are left uncleared until space is critical. I’ve more than once found ancient cryptic files that thankfully aren’t digital copies of some old evil tome. While ideally you’d avoid clutter entirely, I’ve chosen to segment the clutter out of the way.

Traditionally, I’ve gone for a boot drive of reasonable size and a large storage drive. This gives me a place to do backups on the system itself and, to some extent, keeps clutter off the main drive – I tend to download files directly to the data drive.

I recently had my laptop’s HDD replaced for bad clusters; a reformat fixed it, and I figured I’d run it to failure and then replace it. Having a separate drive for downloads is *brilliant*. I have separate directories by source – firefox or torrents, for example – and can then move things over to the storage drive as needed. This lets me actually organise stuff as it goes into the storage drive. Sure, I could use a sub-directory, but let’s admit it, I just tend to pile stuff up one on top of another.

There are a few realisations I’ve made. Firstly, a large quantity of the data we work with is transient in nature – assignments, for example, are only really needed during term time (and can be archived or deleted later), and software installers are used once, then deleted. Separating out transient storage makes sense! The second is that having per-application ‘download’ folders helps declutter things a lot. Things I download from firefox often have a shorter useful life, and are often smaller, than stuff from a torrent. I only use a separate disk because I had one spare – anything from a folder of folders to a partition makes sense. It’s not just a technical fix – it creates a separate ‘bin’ for files like this, and you’re more likely to keep things neat.

YMMV – I only run 4 disks on my desktop because I’m a packrat and ended up accumulating disks – starting from my standard 1+1, I added a separate disk for Linux (since I didn’t know if it would play well with Windows 8), and a 4th disk from my laptop. It works VERY well, but naturally, not everyone has spare disks or the space to mount them.

We just replaced nearly all the lights here with LED bulbs – it turns out that they make drop-in replacements for the circular fluorescent lamps we’ve used for ages, and swapping them out was pretty simple.

We went with a company called gopro (yes, it’s ungoogleable), distributed by a local company called Song Cho – we ran into a local showroom by accident, tested one, and decided to go with them. The big advantage was that we didn’t have to change our current fittings, and installation was pretty simple.

Next time I make a major lighting change, I really need to take pictures and brightness readings for science. They are a wee bit expensive at 40-50 dollars a bulb (though we’ll be looking closely at our power bills to see if there’s a difference), but god, are they bright. Then again, we went for the highest colour temperature, and Indians love it bright anyway.

Installation-wise, the hardest part was removing the old, rusted/corroded ballasts and the crumbling connectors from the old bulbs – you replace the ballast with a supplied LED driver ‘puck’, replace the old bulb with a new one (which has the same form factor), and you’re done.

The ‘tubes’ themselves are a round PCB with lots of little, massively bright LEDs, with ventilation slots on the side of the ‘tube’ facing the fixture. The light is a lot brighter, and being DC-driven, I’d guess they flicker less.

The past term, I’ve been doing Knowledge Management Techniques (KMT) and Knowledge and Organizational Learning (KOL) at school. It’s been pretty interesting, because it’s the exact same set of basic ideas seen from two different viewpoints. KMT is IT-based and focuses more heavily on processes – the *explicit* knowledge side of things, probably because most IT guys sooner or later do things from experience rather than documentation. KOL comes from the management side and tends to be about sucking up that tacit knowledge and converting it to explicit knowledge, so it focuses quite heavily on people.

I suppose all that thinking/doing got me off my ass to get this site up – considering I’ve had the VPS it runs on for over a year, and the domain for about six months – but the other reason was that I’d been focused on other things.

When starting on a project, it’s always useful to have some idea of what it’s about. This isn’t going to be a talk-about-my-life blog. This is going to be about technology, and my interface with it. I want to talk about keyboards, mice, or new software from a pragmatic user’s viewpoint. I’m more interested in how things fit than in massive technical discussions on how mouse A does perfect doughnuts while mouse B doesn’t.

I want to be half as awesome as Tested ;p.

Hello World.

The blog needs a cooler name, but this will do for now. The theme needs tweaks, and various other changes need to be made, but it looks like we’re up.