While it “works”, there are various minor issues with this iteration of my bias lighting setup. More brightness is always nice, and I’d like to combine the ability to power it off my PC with a few additional features: namely, being able to turn it off, and setting it up so each monitor’s bias lighting can be controlled individually. I went with a cheap, commodity remote-controlled relay – an MTDZ007.

Basically, a standard-ish female barrel connector that shipped with my LED strip is connected to the power input of the relay, on the left. The 6 terminals on top are 2 sets of “NO, Common and NC”, so you want to bridge the positive (or negative) power input to those. I chose to jumper the power input to one common terminal, and ran a second jumper wire between the two common terminals. All of the wires of the other polarity (in my case, negative) are connected directly to the power input. The positive “leg” of a pair of JST-style connectors connects to each NO terminal to complete the circuit. As such, I can now connect and disconnect the LED arrays for troubleshooting purposes (unlike the earlier soldered-together setup).

The jumper header switches between momentary (say, if you want to turn on a PC with a remote control), flip-flop (presumably for forward/reverse, or lighting up one sign or another) and independent control.

For power input, I simply bought a molex-to-fan connector and connected it to a male screw-on barrel terminal – I may convert something else, maybe a SATA-to-something-else cable, in future, and this is neater than my current molex-to-hacked-up-floppy-power-cable setup.

As for the actual lights – I’m upgrading from 3528s to 5050s for more brightness, and it’s straightforward otherwise. There’s *almost* no soldering required (unless you need to splice a longer cable onto your JST-style connectors).

I’ll entirely admit this is *partially* inspired by this MSE post – and a certain desire to move off the purely “this is a neat thing that I did” content this blog has been (even if I need to do an updated ttrss install guide).

And well, I’m not a marketing guy. I’m also not your typical consumer of, well, the internet. I still use RSS to aggregate news sources, pulling in and skimming through article summaries, and most of what I read is stuff that interests me – I love long-form, thoughtful, informative writing. I want *stories*.

What I don’t want is lists. I mean, yeah, I’m sure it’s an easy way to knock out a dozen articles. It’s the *second* easiest way to get an article out, short of making a summary of another article. But… lists are lazy. I’m tempted to write “5 reasons why listy posts are killing writing”, but that’s not what you’re here for, right? (Why are you here?)

There’s probably a few nice things about list-type posts for content creators (and aggregators). You don’t need to think about the structure of the writing and the flow. You basically go:

Title (link?)

Paragraph, in the vaguest sense, that no one actually reads. “Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est canis cavem.”

Another Title (second link?)

Paragraph, in the vaguest sense, that no one actually reads. “Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est canis cavem.”

and so on.

At best, you have bite-sized, relatively fast-to-consume bits of information – at worst, you basically have people following links till they find what really interests them, and their engagement with you is gone. Either way, there’s no depth, no real “addition of value” for the reader. I realise writing long-form, thoughtful pieces is hard. I did it a lot as a navel-gazing 20-year-old (most of that’s probably lost in the sands of time, only to be found on the Internet Archive), but finding something to write about, organising your thoughts, and actually sitting down and writing something coherent is *hard*. On the other hand, good writing keeps people coming back.

There’s a place for these things, I suppose. Lots of the internet relies on lots of little pieces of content all the time – and clearly someone thinks clickbait works. It sure does for, say, Buzzfeed and all those meme sites that rely on 30-second reads. It’s probably different for companies that provide a service, or, worse yet, actually are a source of content.

So… to me, lists are a reason the internet kinda sucks.

 

Writing init scripts is a pain in the rear. Upstart was better, but badly documented. And for all the fruit thrown at it, systemd isn’t that bad when it comes to the end-user experience. I’d been using supervisord to run the update daemon for ttrss. Then I upgraded from 14.10 to 16.10, and supervisord broke. After spending much too long on the update, and getting saved by a friend who knows his way around a debugger, I felt it was the perfect time to scrap that and run it on systemd.

I started off by looking at a script someone else wrote. It has a few nice things, and it’s a good starting point, but I use a different version of PHP (potentially!), postgres, and so on. I also looked at the official docs for RHEL 7. I figured starting after the database and networking made a ton of sense. I had to find the name of the postgres unit, the path to PHP (I run PHP 7.0), and include my own path to ttrss. Restarting on failure would be nice, and while I didn’t quite grok that from the official docs, the U&L stack has a lovely question on this.

 

Putting all this together

Fire up

sudo nano /etc/systemd/system/ttrss.service

and let’s build us a systemd unit.

[Unit]
Description=ttrss_update
After=network.target
After=postgresql@9.5-main.service

[Service]
ExecStart=/usr/bin/php /var/www/ttrss/update_daemon2.php
User=www-data
Type=simple
Restart=always
RestartSec=5

[Install]
WantedBy=default.target

I had to find a few of the values – I used systemctl status to find the name of the postgres service, and built the ExecStart line using which php7.0 to find the location of the binary.

I want this starting as www-data (it makes permissions simpler), and it’s set to always restart – if it fails, it restarts after 5 seconds.

The [Install] block is the default. Seems to work fine.

And there you have it, a systemd unit for any ttrss setup you care to modify it for.
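If you want to sanity-check the unit before handing it to systemd, something like the following works – write the file out, grep for the easy-to-get-wrong bits, then install it. The paths and the postgres unit name here are from *my* setup, so adjust to taste:

```shell
#!/bin/sh
# Sketch: write the unit file locally, sanity-check it, then install.
cat > ttrss.service <<'EOF'
[Unit]
Description=ttrss_update
After=network.target
After=postgresql@9.5-main.service

[Service]
ExecStart=/usr/bin/php /var/www/ttrss/update_daemon2.php
User=www-data
Type=simple
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
EOF

# Restart= values are case-sensitive lowercase ("Always" silently fails).
grep -q '^Restart=always$' ttrss.service && echo "unit looks sane"

# Then, on the actual box:
#   sudo cp ttrss.service /etc/systemd/system/
#   sudo systemctl daemon-reload
#   sudo systemctl enable --now ttrss
#   systemctl status ttrss
```

The commented systemctl steps at the end are the standard install dance; `daemon-reload` is the one people forget after editing unit files.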

 

System-level backups are pretty much something everyone *needs* to do. While Windows has had a bit of a mixed history on this (the current system imaging tool on Windows 10 is from Windows 7), there are excellent third-party tools to do this.

I’m not really going to go into massive detail on these, since these tools seem to be evolving quickly and are pretty self-explanatory. Veeam Endpoint Backup is my current go-to backup software. I run one backup a day; it does incrementals and automatically prunes excessively old backups. It *has* failed on occasion on my oldest active system (with 1GB of RAM), so I would consider it unsuitable for such systems (I use Macrium Reflect Free on those. It’s not as elegant or simple, but it covers the gaps Veeam leaves). You can restore to a smaller drive with either tool by manually setting partition sizes (I did my SSD migrations by restoring backups).

Veeam’s simple, elegant, and idiot-proof in the 90% of situations where it works. As always, *test* your backups, preferably to another drive, *BEFORE* you need them.

It also only backs up to one location. One is none. I also like to have copies of music and other files that would be a pain to re-convert, but these don’t need to be part of my boot/system image. I use Bvckup 2 for this. I used to run the beta (which was free) and bought a licence during a periodic sale. It’s *awesome*. I use it to sync a 100+ gigabyte folder of music and a few hundred gigabytes of backups (stored on a NAS, and synced back to my desktop), so it handles big files, lots of small files, and network shares great.

Set up the backup job here. You’d want to back up to an empty folder.

Nice simple interface; it shows you when the task last ran and if it failed. I use this to back up my music folder (to 2 locations), and backups (ditto). It’s *insanely fast* due to delta copying and other things, and backups typically happen at close to line speed. I set this up, and as long as it’s running, it quietly and unobtrusively backs up my files to other locations. In short, it does what it says on the tin.

Writing this up because someone was asking me about it, and it’s one of those things that I’m actually pretty proud of. It is, however, a slightly complicated setup, involving many pieces of hardware and software working together. I’ll be starting with what I expect out of my backup setup, and an overview of how it’s set up. Part 2 will cover the software I use in a bit more detail, and why I use it.

Since I’m ‘only’ backing up 2-3 systems, and the ones that need backing up and bare-metal restoration on failure run Windows, I’ve chosen to run a very similar setup on them. I’ve a few basic rules in place when designing the setup, which are a mix of (bitter) experience and things that have worked well for me in the past.

One is none. I’d like a minimum of *three* different backup locations for anything important. Four is nice. I’ll eventually be looking at off-site storage, but for now, that’s a large hard drive on my main desktop (which also acts as a central hub for replicating backups), a network-attached storage device, a fileshare on my linux box hosting live or semi-live backups, and a spare copy that’s not super up to date. In theory, I can survive 2 of 3 backup locations going down, with at most a reinstall of the OS on the desktop or linux box.

Different backup types are treated separately. Music gets file-level backups, since rebuilding my collection would be a pain. Windows *system drives* get imaged for quick restoration, and I keep a week’s worth of incrementals. I’ve tested my backup software for a worst-case scenario (drive failure – restored to an empty drive and checked if it boots) and checked that I can do file-level recovery from there.

I also never keep the *primary* copy of a backup set (the copy that’s initially backed up to) where it’s most likely to be accessed from. Music’s updated on the desktop, but the share I usually use is on the linux box. Backups are saved to the NAS, but should my desktop SSD or the NAS fail, I’ll likely use the local copy for restores.

I tend to standardise on Windows file shares/SMB for slinging files around, since I’m typically backing up from Windows boxen and at least one end of my backups is on Windows.

If I have a single point of failure, it’s my desktop, since one of its hard drives is a backup repository, and it’s the ‘hub’ from which I replicate backups.

The *primary* backup storage at the moment is a Seagate NAS. It’s 3TB, and entirely standalone. While I can use a password-protected share, I’m using one without a password, since I’ve had massive headaches with the previous, fairly securely set up primary backup. If everything *else* goes south, that’s where my backups would end up.

Previously I was using my Brix as the primary backup, running Fedora (with SELinux turned on) and a 1TB hard drive, about half of it used for backups. For some reason my laptop would be able to write, but not *overwrite*/amend/delete files, so it wasn’t a very good primary backup. Going with Ubuntu or a NAS-centric platform might have been a better idea, but the Fedora box has other uses. It’s currently my third-line backup, with backups sent from the main desktop – which can connect reliably to it.

I also have an external HDD which acts as an additional layer of backups, though it’s not kept as up to date as the other copies.

More on what software I actually use next.

Until recently, I had a boring, run-of-the-mill 1600×900 display. It worked well enough, it was a decent gaming monitor, and, well, I didn’t really see anything that made me go wow.

I figured I’d eventually go for one of the Korean displays Jeff Atwood seemed so fond of, and maybe an inexpensive colour calibrator. It’s one of those “I’ll do it eventually” things I never got around to doing. Between the bewildering array of choices, and me being late to the party and missing them being *ultra* cheap (they were a bit more expensive by then, at around 400), it just didn’t happen.

What I did end up getting was a P2715Q. I paid somewhere north of 900 SGD (with sales tax, and a smaller discount than Dell offers in the US) for it. It’s a proper 4K display (with 60Hz capability over DP/mDP) with a 27″ IPS panel (there’s a 24-inch version for somewhat less, but I figured I didn’t want/need pixel density that badly).

 

This display supports single-stream transport (earlier displays used multi-stream transport, splitting one panel into 2 streams, since 4K requires a lot of pixel pushing, even at the display level) – this means it can hit 60Hz as a single display, and 30Hz if daisy-chained to a second display through a specific DisplayPort connector. In my case, between the onboard graphics and the Nvidia GTX 660 (which I’m planning on upgrading), I have plenty of ports. This is probably more useful if you want to run two of these off a laptop for some reason.

Out of the box, the display looks a bit yellow. *Do not adjust your set yet*. Both P-series monitors are properly colour-calibrated out of the box, and look glorious for most things. It took a bit of getting used to – I’d tried to calibrate my old VE220T to be a bit better, and, well, it’s impossible to get it calibrated with the tools in Windows. This one, I didn’t need to adjust at all. I did turn down the brightness to 50% (most monitors, even nice ones, come too bright out of the box. IPS tends to be brighter than run-of-the-mill displays, and even with those, turning down the brightness actually makes things better). After a few days, it’s a *quantum* leap from my old display.

Ports-wise, it comes with one Mini DisplayPort and one DisplayPort connector, one HDMI/MHL connector (which would restrict you to 30Hz), and a second full-sized DP connector for daisy-chaining. It also comes with one USB 3 input and 4 USB 3.0 ports. All except one of these are in a recess behind the screen, along with the video and power inputs. On one hand, this keeps wiring neat. On the other, it makes the USB ports hard to access for casual use – they’d probably be more useful for things you don’t plug and unplug often. There’s one port on the back that’s vaguely reachable, but it’s definitely no replacement for a USB hub. The ports power down when the monitor does, so you can’t really use them for charging either.

Industrial-design-wise, there are many complaints that Dell’s conservative. Their monitors do tilting, panning, height adjustment and rotation, and look the same as they did half a decade ago, because it’s pretty hard to improve on that. Even the thick bezels have a purpose – to hide nice tactile buttons. The only real complaint I have is the rear USB ports, but the older models with side USB ports were somewhat less thin.

Whoever designed this monitor was pretty serious about power use. The USB ports shut down when it goes on standby. It’s got a little graph showing power usage. It’s got a feature that *lets you turn off the power LED when it’s running*, which I love to death.

On YouTube 4K videos, colours are lovely. There’s pretty much *no* 4K media that’s readily available (I went to take a look at my friendly local torrent site and it’s 99% porn… not that I was going to download anything ;p). I’m currently using madVR to upscale, but I’ve not really done comparative tests between it and the CCCP defaults. Nonetheless, video quality is *subjectively* better than it was on the Asus at most things.

My video card probably can’t handle 4k gaming. I’m running things at 2560×1440 for now and my GPU handles it fine.

So, for somewhat more than what you’d have paid for a no-name, A-grade Korean display a few years ago, you can get a properly colour-calibrated 4K display with all the modern inputs.

It comes with an optional application that sits in your taskbar on Windows and lets you set per-application colour profiles (maybe useful), set brightness and contrast (which can be useful, though setting colours for custom mode would be nice), and snap windows into up-to-2×3 grids or custom grids, which can be shiny.

The good part? It’s a *great* display with fewer compromises than the early generations. It does 60Hz, 4K, and is pretty usable out of the box for most folk.

The bad? You’ll need a beefy video card to game on it, and 4K content is a little hard to find.

The ugly? Your old TN monitor. And my photography skills.

 

Update: Almost 4 months on… I discovered that all the USB ports work when the monitor is on standby, which is handy. Not sure if I completely missed something that obvious, or something got updated (do monitors have firmware updates?) in the meanwhile.


While I’ve been happy with my previous hosting (I was using a 256MB VPS on BuyVM with offloaded SQL), I’ve also been running a few other services for my own use on a VM a friend let me use on his dedi. I was holding out for something reasonably cheap (I’m paying about 16 euros, or 30 SGD, for this right now), and not too shitty.

I’ve got an 8-core Avoton, 8GB of RAM, and a 1TB HDD, and paid another 2 euros a month for an additional IP. This blog (and a few other services) will be running on the VM, while I’m keeping the physical box for a few other things. This should let me do quick reboots of the VM if need be, and easier backups and moves in future. I’ve got other plans for the rest of the server.

This is pretty neat.

This blog was offline for a few weeks.

I had my account suspended, my blog down, and an entirely warranted, slightly annoyed email from my VPS host threatening to shut down my service if this happened again… because I was too lazy to set up key-based authentication.

I always figured a reasonably strong alphanumeric password was enough, that linux was reasonably safe from viruses, that an attacker would need to somehow know my password to get in (and yeah, I REALLY should have known better), and that keeping my software minimal and up to date was good enough.

Turns out I got hit by the XOR DDoS trojan. Lovely. It brute-forced my password, injected a rootkit, and used my little, carefully built VPS to DDoS others. I should have known better. I’ve since set up proper key-based authentication, and am pondering port knocking.

Victim blaming rarely helps, but there are a few places where I really messed up.

Passwords aren’t good enough. I may actually redo my key-based auth setup with stronger keys than what I have now. It’s a pain remembering to have my keys with me, so I need to create device-specific keys, and a backup one on a USB drive or something. Key-based auth is *easy*. There are tons of good tutorials out there, and it takes less than 5 minutes.
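For reference, those 5 minutes look roughly like this. The key filename, user and host below are placeholders, not my actual setup:

```shell
# Generate a device-specific keypair. ed25519 is the current
# recommendation; on a real key you'd want a passphrase, not -N ''.
ssh-keygen -t ed25519 -f ./laptop_key -N '' -C 'laptop key'

# Push the public key to the server (placeholder user/host):
#   ssh-copy-id -i ./laptop_key.pub user@my-vps.example.com

# Then, on the server, kill password logins in /etc/ssh/sshd_config
# and reload sshd:
#   PasswordAuthentication no
#   ChallengeResponseAuthentication no
#   sudo systemctl reload sshd
```

Only flip `PasswordAuthentication no` *after* confirming key login works in a second session, or you can lock yourself out.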

I didn’t have real backups – my DB is elsewhere, and in theory (and practice!) I could rebuild my WordPress instance quickly. However, if that install *had* been compromised, well… I’d be in trouble. Still looking for a good solution there. Pondering a periodic scripted tarball of my /var/www and/or something WP-specific.
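The “periodic scripted tarball” idea could be as small as the sketch below. The paths here are assumptions (in real use SRC would be /var/www; it defaults to a demo directory so the script can be dry-run anywhere), and the 14-archive retention is arbitrary:

```shell
#!/bin/sh
# Sketch: datestamped tarball of the web root, with naive retention.
# SRC/DEST are overridable; defaults are demo paths, not real ones.
SRC="${SRC:-demo-www}"
DEST="${DEST:-demo-backups}"

mkdir -p "$SRC" "$DEST"
[ -f "$SRC/index.html" ] || echo 'hello' > "$SRC/index.html"

# One compressed archive per day, named by date.
tar czf "$DEST/www-$(date +%F).tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# Keep only the newest 14 archives.
ls -1t "$DEST"/www-*.tar.gz | tail -n +15 | xargs -r rm --

ls "$DEST"
```

Dropped into cron (say, `0 3 * * *`) with `SRC=/var/www` and a DEST that isn’t on the same disk, it covers the “compromised install” case the DB backup doesn’t.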

My VPS was running *too* well. I’d probably have noticed if I was paying attention to it. I need to log in and look for *obvious* things, like high processor usage. I only noticed this when I logged in to get my WP install out. In short, I need to *proactively* check on this, and not just run apt-get update every so often.
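My rough checklist for those logins is nothing fancy – just the stock tools, looking for the obvious signals a DDoS bot gives off:

```shell
#!/bin/sh
# Quick "is my VPS doing something it shouldn't" once-over.
uptime                                  # load averages - bots peg these
ps aux --sort=-%cpu | head -n 6         # top CPU hogs
ps aux --sort=-%mem | head -n 6         # top memory hogs
who || true                             # anyone logged in who shouldn't be?
ss -tun 2>/dev/null | head -n 20 || true  # odd outbound connections
last -n 10 2>/dev/null || true          # recent logins (brute-force residue)
echo "check complete"
```

An unfamiliar process sitting at 100% CPU, or a pile of outbound connections to hosts you don’t recognise, is exactly what this kind of trojan looks like from the inside.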

Some people suck. Seriously. However, a little patience means that they can’t ruin your day by turning your system into one of the sources of a DDoS attack 😉

Yeah, this is primarily for my own reference in case I need to rebuild, but it might be useful for someone else. The night theme in ttrss is actually *meant* for night-time use, and the developer has *gradually* added support for making images monochrome. I use it all the time, and I want colour. The fix is simple – open up Preferences, customise your stylesheet, and copy and paste the following lines in. The important bits are the “grayscale(0)”s (they’re set to 1 by default), and this overrides the default setting.

body#ttrssMain .cdm .cdmContentInner img,
body#ttrssMain .cdm img.tinyFeedIcon,
body#ttrssMain .cdm .cdmFooter img,
body#ttrssMain #feedTree img,
body#ttrssMain .postContent img {
filter: grayscale(0);
-webkit-filter: grayscale(0);
/* filter: url("data:image/svg+xml;utf8,… */
}

And it was a minor pain. Everything is supported out of the box (even the eMMC that apparently makes older kernels pitch a fit) *except* the trackpad. For some reason, Fedora (KDE?) doesn’t support touchpads properly out of the box, and I didn’t have a spare mouse handy. It’s detected, but a pain to configure without a mouse.

I did the install from a LiveUSB to another LiveUSB, using the keyboard to select between install options, then realised I could fire up a launcher with Alt-F2 and use that to enable tap-to-click. Still working out how to get clicking to click (since the touchpad is clicky).

Wireless, Bluetooth, most shortcut buttons, and the like work fine. Even airplane mode (which I need to see if I can disable in linux, since it’s the same key as my drop-down terminal!)