Author Archives: AJClayton

The home network revamp continues!

It’s time for the second part of my network upgrade series.

This may be extremely geeky, and I’ve yet to get to anything that will pass (even vaguely) as interesting in this project. However, if you’re sticking with me on this journey, I’ll show you my first purchases and, crucially, unveil the main event – the rack!

There’s not much more to say. Please watch the video to see what I’ve bought.

Coming next will be two supplementary videos in the series, in which I unbox my new rack-mounted switch and my UPS. After that will be part 3 of the revamp series, when things really start coming together.

My office revamp. The big cable untangle!

My home office has had more or less the same layout since I first moved into my current home in 2009. Sure, I’ve altered things here and there and slopped a new colour of paint on the walls every so often but the basic layout has pretty much stayed the same.

Until now.

As a project for the remainder of 2017, I’m going to revamp my home office and bring it into the 21st century. I’ll be replacing the radiator and introducing a bright new colour scheme; however, the headline change is the introduction of a rack cabinet.

Racks are usually found in data centres, where they allow many servers to be stacked in the most space-saving way possible, with tidy, organised cable and power management. I’m going to apply the same philosophy to my home office. By introducing a rack, I’ll be able to put the vast majority of my equipment in one place and greatly improve my power and cable routing.

The video above talks briefly about my plans and shows the extent of the problem I’m hoping to overcome: lots of disorganised cables and equipment dotted all over my office; some – like my NAS – not in the ideal location!

I hope you’ll join me on this journey as I seek out a UPS, a rack-mounted switch, PDUs and the like, and re-organise my space to be much more efficient. Please subscribe to my YouTube channel to be notified when part 2 goes live, and let me know what you think!

I’ve finally bought a games console

I’ve never been what you might call a “hardcore gamer” (and that’s not someone who plays video games in their underwear), however for some strange reason, at the ripe old age of (nearly) 47, I’ve decided to buy my first top-of-the-range dedicated console. Why?

When I first owned a computer in the mid-1980s (a Sinclair ZX-81), the only thing that interested me about games was how they were written. I remember a silent, black-and-white, three-dimensional maze inside which a large collection of black pixels in the vague shape of a T-Rex would come after you. I was amazed by this first-person aspect of a game and, while not all that interested in playing it, I was desperate to know how to code it.

It was things like that that got me interested in programming, and here I am, more than 30 years later, doing it for a living. I don’t write games, but I do write code.

Because I’ve always been interested in writing code, and not all that good at games, my exposure to gaming has always been quite limited. On the Commodore 64 and then the Amiga 500 I hardly ever played any. However, when I bought my first PC in the mid-1990s I started attempting to play a few titles – usually with no success whatsoever.

For the next few years I played Toonstruck, Total War, Frontier, Black and White and others but most of my money went on various versions of FIFA. I was blown away by the “virtual stadium” in FIFA 96. I can remember standing in GAME open mouthed at the demo as the camera flew down through the clouds into the stadium. Awesome!

I was rubbish at FIFA 96. And FIFA 98, 2000 and 2001. I think the last copy I bought was FIFA 2002 and not long after I switched to the Mac.

The wilderness years
At that point my gaming pretty much stopped for a few years. I didn’t even worry too much about games on the iPhone and later the iPad. The odd puzzle game perhaps, but nothing too time consuming. I bought a console when I turned 40 which, embarrassingly, was a Nintendo Wii.

My reason for choosing the Wii was so that I could play games without having to learn complex button combinations (always a problem for me) and so that my wife could join in. She’s a total technophobe, so she’ll only play games where either I can “drive” (like Toonstruck on the PC) or she can control things by throwing her arms about (as on the Wii).

Even by the time I purchased my Wii, it was already a little out of date. By then I had an HDMI 1080p TV set, and the Wii’s 480p output over composite video was clearly not ideal. However, the games were fun and it still sits by my TV set, despite not having been switched on for about 3 years.

Deciding to have a mid-life crisis and buy a proper console
With hindsight, I think that the main reason I’ve never been all that successful at games is because when I was really trying to play them, I had a PC with a kind of low- to middle-of-the-range graphics card. PC games at the time also suffered from users having different controllers so most games were best operated via the keyboard. This led to a poor quality experience for many games. I probably should’ve invested in a console back then, but it was easier to just give up and not bother.

At some expense, I purchased a licence for Elite Dangerous on my Mac Pro. With my 2013 “trash can” Mac Pro the graphics are pretty good; however, it has the same problem that I had back on the PC 20 years ago – you will never have the optimal gaming experience unless you have the best possible hardware to play on. And I don’t. I even bought an extra Windows licence and installed the game under Boot Camp just so that I could use Crossfire with the Mac Pro’s graphics cards (and run the latest version of Elite, as Frontier annoyingly dropped Mac support in recent releases). While mildly diverting, it still feels like I’m not having quite as much fun as I would if I had the best possible hardware and didn’t have to spend ages trying to get the best-looking view on my 4K monitor.

Fast forward to Autumn 2016 and I’d been thinking about buying a console for a while to help with much needed relaxation. I’ve had no holiday this year and it’s all work, work, work. Much as I enjoy my work (and I do, I’m a geek, I can’t help it) it would still be nice to unwind at times. Much of my unwinding is achieved by walking the dogs, drinking real ale and playing my saxophone. Not all at the same time, although that’s an interesting thought…

With my rekindled interest in gaming gaining momentum, I read that Sony were bringing out the Playstation 4 Pro. That was good timing, as I had been wondering whether to hold fire completely and wait for the Playstation 5 (which might be another 2 or 3 years away) or for Microsoft’s updated X-Box One, code-named “Project Scorpio”, due out in late 2017.

In the accompanying video (above) I go into more detail about why I chose the PS 4 Pro over the X-Box One S (the current incarnation of the X-Box One that will be replaced next year). The specs of the PS 4 Pro look great and so I decided to pre-order it for delivery on the day of release.

The second video (below) shows the PS 4 Pro arriving!

At the time of writing I’ve had the PS 4 Pro for nearly 2 weeks and for all of this time it’s been sitting in a box in my dining room. I won’t open it until my birthday in December, as it’s my gift to myself (via my wife).

I’ve bought 2 games: FIFA 17 and Project Cars. I’m also planning to use it to stream content from Amazon Prime and Netflix (although the latter is available on several of my set-top boxes, so I don’t need the PS 4 for that).

Hoping I do better
It’s my sincere hope that I will do better with gaming this time around. From spending my entire gaming life using less than adequate hardware, I now own the highest spec console on the market… at least until Project Scorpio, but that’s a good 12 months away.

I know people with custom PCs will say that they have the best hardware – and they do – however, for me an expensive top-of-the-range PC build was out of the question. I just wouldn’t use it enough. My hope is that once the PS 4 Pro is set up and the novelty wears off, I’ll still find that I want to use it. Not having to think about settings and configs, and knowing that what I’m playing has been optimised specifically for the platform, will help a great deal, I’m sure.

Now, the next thing will be a 4K HDR TV to plug it into…

Changing from a Canon cartridge printer to an Epson EcoTank

For the last 6 years I’ve been using a Canon MX 870 printer. It was a good printer: the quality was fine, and I liked that it was a multifunction device with 2 paper trays. However, there was one fatal flaw that meant it just had to go!

For some time now (years, probably) the printer has suffered from an identity crisis. Sometimes it would happily advertise itself as available on my network and churn out prints. Other times, it would be switched on and active however not show up on any device. Worse still was when it sat somewhere in the middle – advertising the fact that it would accept prints but then blatantly ignoring all my requests to actually do so.

The problem was “error 300”. Lots of people seem to get this with Canon MX series printers – it’s all over support forums on the interwebs, and not just for Mac users (although OS X does seem to be prone to it).


After suffering with this issue for far too long, I decided earlier this summer that the printer had to go. I started looking around for an alternative.

Getting on board the tanks
I’d first come across the Epson EcoTank series when it was released in 2015, and I like the concept very much. These printers don’t use traditional cartridges; instead, they have large ink tanks on the side into which you squirt the liquid that will ultimately make up your prints. The ink is cheap and, because the tanks are so large, Epson claim that you can print somewhere in the region of 11,000 pages per fill-up. Obviously this depends on how much coverage there is on the paper but, however you look at it, it’s considerably more than cartridge hardware can manage.

Because the ink is so cheap, Epson make their money by charging more for the printer. If the EcoTank 4550 were a “normal” printer it would likely be a third of the price, but then you’d end up paying more for inks. Even if you use third-party options (some of which are pretty awful), the cost per page is still going to be much higher than the Epson’s 0.5p.

Review
I’ve posted a video review of the Epson EcoTank 4550 to my YouTube channel. You can view that here. In it, I unbox the printer, add the ink, set it up with my Mac Pro and then try some test prints before giving my overall impressions of the unit.

The video was recorded in June 2016 and added to YouTube on 10 September (I take my time editing, clearly).

Other EcoTank models are available with different features to the model I review. Check out the Epson web site in your country for details. I mention two special offers in my review, both of which have now ended.

I am not writing this review or posting my video with the knowledge or support of Epson. I purchased the hardware as a consumer using my own money and my comments are entirely down to what I think of it. You should always take on board a variety of opinions and decide yourself what to do before making any purchase.

Changing from an ISP supplied router to an Airport Extreme

Like many people, for some time I’ve been using the router that has been given to me by my internet service provider (ISP) and, mostly, it’s been something that I haven’t given an awful lot of thought to. However, network problems are not unusual so perhaps the time has come to think again.

When I took my first faltering steps onto the internet back in the early 1990s, you had no choice but to buy the hardware to enable you to do it. You even had to install and configure the software yourself – something that on my Amiga 4000 took several days to accomplish. Even when use of the interwebs became more commonplace, it wasn’t unusual for a service provider to expect you to have your own equipment.

Fast forward to the present day: modems have been replaced by routers, and some ISPs insist that you use their own hardware and won’t entertain the thought of you using anything else.

Luckily for me, BT Business (who supply my FTTC – fibre to the cabinet – connection) are happy for me to use whatever hardware I wish, although if you experience problems they will only support their own kit. When I moved to BT they sent me a BT Business Hub 3, which I later upgraded to a BT Business Hub 5. (Both of these are the same as their “home” counterparts – it’s only the firmware that’s slightly different.)

BT Business/Home Hub 5

Using both the Hub 3 and the Hub 5 has generally been OK, apart from occasional wireless problems. Sometimes (especially with the Hub 5) broadband would die completely for a while, necessitating a reset of the router to factory settings (and the subsequent manual re-applying of various settings). Apart from that, I assumed intermittent wireless issues were normal. Luckily, I use ethernet cables for most of my network, so for most of my time with BT the problem wasn’t a huge one. However, as I purchase more and more mobile gadgets, the wireless infrastructure becomes more critical, so this was starting to turn into a very real problem.

It was then that I decided to take the plunge and look into purchasing a third-party solution. Conversations with BT were not all that helpful (they did their best, but seemed inclined to blame my local network for the problems rather than consider the idea that their hardware wasn’t up to scratch). There are many third-party options out there, and prices vary widely. As I have a predominantly Apple-based system, I decided that an Airport Extreme would be the one to go for. It’s a mid-priced 802.11ac device, so was well within the budget for this project.

Installation
I ordered it direct from Apple and the installation was pretty smooth – I had more problems extracting the box from the cellophane! You can see me unbox the new device in the video that accompanies this blog.

The main advantage of switching from the Hub 5 was that I was able to go back to using the BT Openreach VDSL modem. The Hub 5 has a modem built in, so if you’re replacing a device with an integral modem, a third-party router on its own may leave you stuck. If you don’t have a dedicated VDSL modem, you can either purchase one or use your existing router in bridge mode. How to do this is beyond the scope of this article or video.

In the video you can also see me set up the device using Apple’s OS X based application Airport Utility.

This is a different approach from most other routers, which pretty much universally use a web browser to adjust their settings. Having an OS X application does mean that if you’re using a Windows machine you’re out of luck (unless there’s a Windows version of Airport Utility that I don’t know about).

The one problem
In my video I mention a problem I had with my set up. And this is one that anybody changing to a third party router could stumble across.

My BT Hub had a static IP address of 192.168.1.254. The Airport Extreme uses 192.168.1.1. This small but significant difference means that you may need to update devices on your network to request their IP address (and route their traffic) via the new gateway IP.

I needed to update several devices on my network to point at 192.168.1.1, including my VOIP telephone, Cisco switch and Synology NAS. Once I’d made the change, my internet connection and network all came to life.
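If you’re making the same change, it may help to note that both addresses sit in the same 192.168.1.0/24 network, so the devices’ own IP addresses stay valid – only their configured gateway entry needs updating. A minimal shell sketch of that check (the two addresses are from my setup; yours may differ):

```shell
# Old (BT Hub) and new (Airport Extreme) gateway addresses.
old_gw="192.168.1.254"
new_gw="192.168.1.1"

# Strip the final octet: if the first three octets match, both gateways
# live in the same /24, so existing device addresses remain usable.
if [ "${old_gw%.*}" = "${new_gw%.*}" ]; then
  echo "Same subnet (${old_gw%.*}.0/24): update gateway entries only"
else
  echo "Different subnets: devices may need new IP addresses too"
fi
# → Same subnet (192.168.1.0/24): update gateway entries only
```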

Summary
In the half hour video above I go into detail about the unboxing, installation and, in summary, whether it’s worth upgrading from a free router to a paid one – and a paid one that costs a pretty penny! Has it been worth it? Have my network problems disappeared? Watch the video to find out!

UPDATE: Since making the video, I’m pleased to report that the issue where I had to reboot the Openreach modem has not recurred, and I’ve been running successfully for several weeks now.

Am I a grumpy old git, or are we trying to kid the next generation that coding is really easy?


I’ve been a programmer for over 30 years. That’s not a boastful statement (at least it’s not meant to be); I’m simply saying that I’ve clearly been around the block a few times. This article is about coding and, specifically, why we’re pretending to young people that it’s an easy thing to do.

So the purpose of this post is to complain, in a somewhat grumpy-old-git kind of fashion, about initiatives like “The Year of Code” and, perhaps to a lesser extent, the “BBC Micro Bit”. But more about why I want to whinge about those in a moment. First of all, a history lesson.

A life of code
As a developer, it’s important to speak lots of languages. Not the ones where you battle with a phrase book to order a cup of coffee in a foreign land and end up asking to sleep with the barista’s sister. No, I’m talking about coding languages.

I started off, as many do, with BASIC – Beginner’s All-purpose Symbolic Instruction Code. It’s deliberately simple to use and understand (it’s not called BASIC for nothing) and has always been a good way for budding coders to discover what communicating with a computer is all about.

My BASIC experience started on a Sinclair ZX-81. It had 1KB of RAM, although I plugged in a rather wobbly 16KB RAM pack to boost this by quite a margin. In those days, all you had to look at was a flashing cursor on a black-and-white screen, and there was no audio. You learned very quickly that coding can, at times, be quite difficult; that you need to be patient and methodical; and that instant gratification is unlikely until you’ve committed to several hours of work and solid debugging. It’s a salutary lesson in the kind of personal attributes you need in order to be a successful programmer. If you’re the kind of person who gets frustrated easily, then coding won’t suit you. Surely, as we’ll discuss in a moment, it’s a good idea to find that out sooner rather than later?

I found that I loved coding. It didn’t always go to plan. I couldn’t always get the computer to do what I wanted it to. But that didn’t matter to me. It just made me want to discover more so that I could make the device dance to my tune.

From BASIC I went through a number of languages on different systems. Some I used for a while, some I still use parts of, and yet more are long forgotten. I wrote an integrated office suite from scratch in Pascal for my A Level Computing Science. On my Commodore Amiga I dabbled in an interesting language called FORTH (I wonder what happened to that). And, of course, I’ve used C, C++ and, in more recent years, Objective-C, Swift, PHP and more database and web technologies than seems sensible to list here without boring you rigid.

Each language has its own peculiarities, advantages and disadvantages. Sometimes you have no choice about which language to use: for example, for many years Objective-C was (almost) the only answer for iOS and Mac OS X development. That changed in 2014, and I’ve recently started using Apple’s new Swift language and am really enjoying it. One of the attributes I mentioned earlier is being open to change like this – the shifting sands of technology. Sometimes those sands shift so quickly that you have to change the way you work completely. That’s just the way it is in coding, and it keeps us geeks on our toes!

After those 30 years (and then some) I’m still discovering, learning, failing (often) and debugging constantly. And I love it. Because I’m the right kind of person to be a programmer.


So time for my moan
There’s an initiative that’s been running for a while now called “Year of Code”. I don’t claim to know much about it, but what I do know is that it encourages young people to get into computer programming. Or at least that’s its intention. There are other initiatives, such as the recently launched BBC Micro Bit, that aim to do similar things.

Let’s get something straight from the start – I have no problem at all with trying to get people to code. In fact, I think it’s essential that we do; my complaint is how we seem to be going about it.

In 2014, Year of Code, with backing from the British government, appointed a figurehead for the UK’s campaign: Lottie Dexter. She’s one of those bubbly “can do” sorts who, I guess, the people in charge thought would appeal to the media. I have nothing against Ms Dexter, and I don’t blame her entirely for the situation she found herself in; however, it was still mind-bogglingly vomit-inducing. I saw her interviewed on BBC Newsnight one evening, where it turned out that she didn’t know anything about coding. She didn’t even know what it was. She was planning to “learn it over the next year” and claimed that teachers could “be trained how to educate students in computer programming in a day”.

Clearly, no-one had bothered to check if Ms Dexter had the necessary skill set to be a coder, because (obviously) anyone can do it. Anyone can be an Ada Lovelace or an Alan Turing.

Thanks very much for belittling my profession and hard earned years of experience. No, really, thanks.

This, in a nutshell, is what annoys me about initiatives like “Year of Code”. The total lack of understanding and the failure to accept that programming takes a certain type of person and many years to perfect (if, indeed, one ever does). Like every single profession, programming is the right job for some people and the wrong job for everyone else. These schemes all seem to think that absolutely everyone can code.

I’ve got news for you: They can’t.

And there’s more…
Around the same time as Lottie-gate, I remember watching an interview with a child of no more than about 10 years old. The interviewer was saying enthusiastically that the little darling was a programmer and had written their own game. “Isn’t it amazing!” gushed the TV airhead. Well, no, it wasn’t amazing really. The “programming” in question was an environment in which some elements were dragged onto a virtual canvas, and then you keyed values into a handful of boxes on screen to decide what the “program” would do. No code was harmed (or written, for that matter) in the making of that “game”. What it had proved was that the child could type a number and use a mouse. That hardly constitutes coding in my book.

Once again, this is an example of sending a signal to children that coding involves a simple bit of drag and drop or, in the case of the BBC Micro Bit, a few (very) simple commands to make some lights flash. The BBC Bitesize page on coding, incidentally, doesn’t even spell “program” correctly, but I digress (not for the first time in this article).

So what should they be doing?
What really needs to be done is to demonstrate to children what programming really is, warts and all. I’m not suggesting that an 8-year-old should be subjected to 6 weeks of intensive C++ object-orientation techniques; however, using the fundamentals of coding in schools to find those who have an aptitude for it would be no bad thing. In all subjects you start with the basics.

Take physics, for example. Those in year 6 won’t be discussing the plausibility of multiple universes or the finer points of string theory; they’ll be playing with weights and discovering the fundamental principles of gravity. Similarly with coding: a language like LOGO would be a good introduction, alongside the basics of binary and solving logic problems. It’s not rocket science. Those who like it will be open to the more complex sides of coding, while being under no illusion about what it’s really going to be like as a profession.

In summary
You may think that I don’t want young people to learn coding because they’ll take my job away from me. That couldn’t be further from the truth. As we become ever more reliant on technology, there will be plenty of coding opportunities for all of us. However my point is that coding isn’t something that just anyone can do, and no amount of poorly thought out initiatives are going to change that.

Coding is time-consuming, it often involves working to tight deadlines, and it’s not unusual for projects to change completely during their development. Don’t be surprised if you’ve spent hours working on something only to find it canned and replaced by a new requirement at the eleventh hour. It happens. Spending many hours looking for one tiny bug that comes down to a single comma or full stop isn’t unusual; you need the patience for days that go like that. However, seeing a web page, a computer or a tablet do something just because I’ve programmed it to work that way never gets boring, and there are plenty of other devices out there that need code in order to come to life.

Just because I code for a living doesn’t mean that I no longer code as a hobby. I was looking for a gadget recently and couldn’t find the exact spec I wanted, so while the majority of people would prefer to purchase a finished product, I’ve decided to build and program one myself, most likely based on a Raspberry Pi. I’ll do it because I can. I’ll do it because I have the right kind of logical brain that will happily work through all the inevitable problems to get to a working solution. That’s the joy of coding for me. It won’t be easy, but it will be very rewarding.

And that’s the idea that we need to sell to those children who have the aptitude to do likewise.

I’m now posting my articles both on my blog and also to Medium. You can read this article on Medium here. Please follow me on Medium here.

Why the EU Remain campaign needs to start being positive

If you’re in the UK and haven’t been living under a rock for the last few weeks, you will be well aware that on 23 June 2016 everyone of voting age in this country will be able to have their say on whether we stay in or leave the European Union.

It’s a big decision and, unless you’re predisposed to one side or the other, very difficult to decide which way to jump. I’m very much undecided at the moment; so firmly sat on the fence that I’m at serious risk of getting splinters.

The campaigning has hardly started, of course, and we haven’t even had an official decree of which two campaigns will be deemed the authorised voices of the “outers” and the “inners”. What’s happened so far, however, is a bit disappointing. The “inners” seem to be basing their campaign on why it would be the end of the world to leave. One day they were saying that it would take two years for the UK to negotiate “Brexit” with the EU. A few days later this was five years, and then shortly after that, 10.

Of course, no-one knows how long it will really take, but that doesn’t seem to matter to the remain campaign — if it sounds awful, they’ll just say it, and when that’s worn off they’ll make the same point again with bigger numbers to grab a few more headlines.

They might be correct in what they say, however this scaremongering is simply baffling any of us who are trying to make sense of it all.

The “outers” aren’t doing much better. Their campaigning seems to consist mainly of calling the opposition “Project Fear” and saying that the latest figures quoted and comments thrown about by the “inners” are a load of rubbish.

They could be correct too — but who knows?!

My plea to both sides of the argument is to stop doing what seems to be the default in any vote — slamming their opponents without bothering to give positive reasons for their own position.

I hope that some time between now and voting day, we can start to see some clear factual reporting of what’s correct and what is not and perhaps we’ll all be able to vote with a clear idea of why we’ve gone for one particular option over the other.

It’s either that, or come 23 June polling booths up and down the country won’t be as silent as they normally are, but instead will echo to the sound of tossed coins falling on the floor.

I’m now posting my articles both on my blog and also to Medium. You can read this article on Medium here. Please follow me on Medium here.

Introduction to the Apple Watch

The Apple Watch launched in April 2015, so this blog post and video, recorded in January 2016, are, in product terms, quite late! However, like many people, I purchased an Apple Watch (in my case the Sport edition) for Christmas 2015, and these are my thoughts.

The video features a complete look at the Apple Watch: an unboxing, pairing with an iPhone, and a look at some of the default-installed (and a couple of third-party) apps. I also talk about the good and bad points of the device, as well as giving a recommendation on whether, in early 2016, you should be considering purchasing one yourself.

I can’t promise that the demonstration section of this video is exhaustive (it most certainly isn’t) however I hope that during the half hour that this film runs (by far the longest video on my channel so far) you’ll pick up some hints and tips that will be helpful if you’re considering buying a smart watch yourself.

As always, I’d love to hear your comments about the Apple Watch, competing products (like Pebble and Android Wear) and also what you think of my latest attempt at a video!

Quick tip for Synology users with a Mac

Here’s a quick tip for anyone with a Synology NAS who uses it with a Mac running OS X El Capitan (although this should work under Yosemite and perhaps even Mavericks).

When listing directories over the network there can, for particularly large folders, be a noticeable delay before the directory contents are displayed on screen. Thankfully, a delay like this is pretty rare; however, this tip will speed up network directory listings for folders of all sizes – you’ll just notice it more if the folder in question contains loads of files.

Fire up Terminal and run the following command (do not run as super user using sudo):

defaults write com.apple.desktopservices DSDontWriteNetworkStores true

Reboot your machine and then marvel at the increased directory listing speed!

What does it do?
This simply prevents the hidden .DS_Store file from being written to network devices, which speeds things up monumentally!
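If you want to confirm the setting before (or after) making the change, you can read it back in Terminal. Note that `defaults read` exits with an error if the key has never been written, which simply means you’re on the default behaviour (.DS_Store files are written to network shares):

```shell
# Read the current value of the setting; if the key has never been
# written, `defaults` exits with an error, which we turn into a
# friendly message here instead.
defaults read com.apple.desktopservices DSDontWriteNetworkStores 2>/dev/null \
  || echo "Key not set: .DS_Store files will be written to network shares"
```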

However, a word of warning. Synology’s forthcoming DSM 6.0 has support for OS X’s Spotlight! This is a major improvement and I’m so, so happy about it. Details are sketchy at the moment; however, I suspect that network shares will need to be mounted over AFP rather than SMB (which has been the default protocol since OS X Yosemite). I prefer SMB (I find it more stable) but will be willing to give AFP another go should Synology’s Spotlight implementation require it. Anyway, I digress. The reason I mention this is that it’s entirely possible that Spotlight support in DSM 6.0 will require .DS_Store files to be written to network shares (as that’s where important Spotlight information is saved), so this tip might be out of date soon. (It just makes me wish I’d known about it 18 months ago!)

So if you need to revert to enable Spotlight in DSM 6.0, just enter the following command in Terminal under your user and then reboot:

defaults write com.apple.desktopservices DSDontWriteNetworkStores false

Note that this is a user level setting, so if you have multiple accounts set up on your machine then you’ll need to run it for each of them.

As always, this advice should be taken at face value and you enter commands into Terminal at your own risk. Blah, blah, blah. 😉

A demo of my sit/stand desk

Regular readers (who are they?) will be aware that I have a desk that can be raised or lowered so I can use it in a traditional sitting position or standing up. This kind of thing is ideal for an accompanying video, so here it is!

I’ll be making a more detailed video about my desk at some point in the future. I’ll be talking in that about why I went for the solution demonstrated here and will mention other products that are on the market.

In the meantime, if you want to read my original blog post from July 2014 about why I decided to buy a sit/stand desk, you can do that here. You can also read about why I chose this desk here, although as that article was written nearly a year ago the second part to my video will update my conclusions in that blog. So watch this space for more!

Thank you for watching and please subscribe to my channel to see more content like this.