Workaround for IKEA Bekant Sit/Stand Desks

When I set up my home office in 2015, I knew I wanted a motorized standing desk. The easier it is to raise and lower, the less commitment there is in standing up, and the less commitment there is, the more often I actually do it. At the time, IKEA had just released their Bekant Sit/Stand desk for about 2/3 the cost of the competition. I wish I had paid more to get a better desk.

The problem: It’s well documented that the Bekant has a power supply issue. The symptom is that it stops after raising just a few millimeters, then later refuses to move at all. This is caused by the power supply not putting out enough juice, according to some reports. My desk got stuck in a standing position for a week, maybe the longest work week I’ve had.

The workaround: Unplug the power supply when you are not actively raising or lowering the desk. Leave it unplugged until the moment you feel like switching between sitting and standing. I’ve been doing this for the past week and have had extremely consistent success. My power supply hangs in the netting under the desk, so I just pop the cord into it when it’s time to move. No need to mess with the outlet.

The real fix: Keep returning your power supply to IKEA until you get one that works. Some people report never having an issue, and maybe you will get a power supply as glorious as theirs.

The good news is that the desk has a 10 year warranty – if you have your receipt. And your return will be much smoother if you get an employee who doesn’t demand that you disassemble your entire desk just for a detachable power supply.

I was lucky to have an employee tell me that they accept scanned copies of receipts, so mine lives in Evernote and comes out once a year when I want to try to fix my desk yet again. It takes about an hour to do the exchange in my experience, and I’m on my 3rd power supply. I also happen to live 10 minutes from an IKEA; I can’t imagine driving a couple of hours to do the exchange.

You may have some luck in contacting Rol Ergo directly for a replacement, according to this Facebook comment.

[Photo credit: mastermaq under CC 2.0 BY-SA license. Because my desk is in no condition to be seen by anyone.]

How to Install Movable Type 2.64 on macOS Sierra

I started this blog in May of 2003. I had a LiveJournal at one point, and even wrote my own blog system to teach myself a new language called PHP. But this blog, 90% Crud, started then. I used Movable Type, a Perl CGI application. I wrote some stuff, met some good folks and was inspired to do some neat stuff.

I started this blog in May of 2015. My friend Adam and I wound up looking at some old blog posts one day, and there were some good ones. I thought that maybe that was something worth doing again, even if I never really figured out what I was doing the first time around. I set up a WordPress site and started writing again.

Now that this site is at a new home and I’m working for the WordPress.com company, I thought I should finally get my old archives into my new blog. And I did: I managed to export my old blog and import it here. But there was some manual work involved, and I actually had to look at what my old blog was.


For the record, I try to look at nostalgia as an indulgence. Too much time looking back keeps you from looking ahead. But it’s still something you should do, from time to time.

Seeing the old posts, I’m struck by how many comments there are. I suppose people still leave plenty of comments on blog posts these days, but now they live on the ephemeral social shares instead of on the blog itself. 20 comments on Facebook, 0 on the blog. That makes sense in some ways.

I see some of the high points, especially this one, but I like some of the lower-key ones too. An old dog who chews earbuds. Old video games. Citizen journalism (as we called it at the time) with ancient snap-on cameraphones. Nerdy tomfoolery. I had completely forgotten that I had written a Movable Type plugin.

Looking back helps me figure out what I should put here in the future. I’ll be posting more personal stuff here. I’ll keep posting political stuff when I’m fired up. And I’ll try to keep doing projects, even as my free time dwindles.

Anyway, enough delay, I know you want to get Movable Type 2.64 running on macOS Sierra. Here’s how:

To start with, make sure you have a copy of Movable Type 2.64. Maybe in your backups somewhere. Do a find, because it’s not in the directory you think it is. Look for mt.cgi. Put that in /Library/WebServer/CGI-Executables and the corresponding static assets in /Library/WebServer/Documents/.
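Roughly like this, though the paths on the left are just a guess at where your backup might live:

    # Hypothetical backup paths; adjust to wherever your copy actually is.
    sudo cp -R ~/Backups/oldsite/cgi-bin/mt /Library/WebServer/CGI-Executables/mt
    sudo cp -R ~/Backups/oldsite/docs/mt-static /Library/WebServer/Documents/mt-static
    # The CGI scripts need to stay executable.
    sudo chmod +x /Library/WebServer/CGI-Executables/mt/*.cgi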

Edit /etc/apache2/httpd.conf and uncomment AddHandler cgi-script .cgi. Marvel that we used to write Perl CGI scripts, ignorant of how slow it was to spin up a new process for each request. Then sudo apachectl restart.
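The relevant lines, roughly (on a stock Sierra install you may also need to uncomment the mod_cgi LoadModule line, not just AddHandler):

    # In /etc/apache2/httpd.conf, make sure both of these are uncommented:
    LoadModule cgi_module libexec/apache2/mod_cgi.so
    AddHandler cgi-script .cgi

    # Then restart Apache:
    sudo apachectl restart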

You’ll also need a MySQL dump of the database. You have it somewhere, even a decade and a dozen computers later.

Now you should brew install mariadb, since you’ve heard that’s what people use now instead of MySQL. Load that up with good old mysql -u root < mysql-dump.sql.
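Something like this, assuming you already have Homebrew and your dump is named mysql-dump.sql:

    brew install mariadb
    # Keep the server running in the background across reboots.
    brew services start mariadb
    # Load the old database into the fresh server.
    mysql -u root < mysql-dump.sql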

Go to http://localhost/cgi-bin/mt/mt-check.cgi and realize you don’t have the DBD::mysql Perl module installed. Try sudo perl -MCPAN -e 'install DBD::mysql', but some of the tests fail for some reason. Find the build directory in ~/.cpan/build, run make install from there anyway (or force install from the CPAN shell), and hope those tests didn’t matter. Be glad for your time as a Perl guy.
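In shell terms the dance looks something like this (the build directory name depends on the module version that CPAN fetched):

    # The normal install bails out when some tests fail:
    sudo perl -MCPAN -e 'install DBD::mysql'
    # Install from the already-built directory anyway and hope for the best:
    cd ~/.cpan/build/DBD-mysql-*/
    sudo make install
    # (The CPAN shell's "force install DBD::mysql" accomplishes the same thing.)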

Go to http://localhost/cgi-bin/mt/mt.cgi and see a login screen! Realize there’s no way you know your password. Here’s the query you need to change it. Now you’re in.
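The reset boils down to a single UPDATE. This is a sketch from memory: I believe MT 2.x keeps a Unix crypt()-hashed password in mt_author.author_password, and the database name here (mt) is just a guess at what yours is called.

    # Hypothetical example: set the password for author 'Melody' to 'Nelson'.
    # MariaDB's ENCRYPT() produces a crypt() hash, which is what MT 2.x checks against.
    mysql -u root -D mt -e \
      "UPDATE mt_author SET author_password = ENCRYPT('Nelson') WHERE author_name = 'Melody';"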

Time to start blogging.

Please stop pretending you know how much bandwidth BitTorrent uses

I’ve seen this mentioned a lot when people talk about BitTorrent, but this bit from an announcement of BitTorrent 4.20 happened to push me over the apathy threshold:

It’s no secret that bandwidth concerns have been one of the more pressing issues surrounding the BitTorrent community. CacheLogic, which provides P2P caching solutions for ISP networks, has previously calculated that approximately 60% of a networks bandwidth is consumed by the BitTorrent protocol. This average varies according to the ISP, as some ISPs report less bandwidth consumption and other reporting more.

They’re completely wrong about what the CacheLogic study says. The most recent numbers I could find from CacheLogic say that “P2P still represented 60% of internet traffic at the end of 2004” and “By the end of 2004, BitTorrent was accounting for as much as 30% of all internet traffic.” Even if P2P has grown in the past 18 months to consume 99.99% of internet traffic, CacheLogic’s own studies show that eDonkey surpassed BitTorrent in P2P traffic in August 2005. If CacheLogic’s numbers are correct, there’s no way that BitTorrent has more than 50% of internet traffic.

But that’s the real issue here: are CacheLogic’s numbers correct? Look at what CacheLogic sells: P2P caching appliances. Their entire business is built around reducing the amount of bandwidth P2P applications use. And they are also the sole source of numbers saying that P2P applications are using lots of bandwidth.

I’m not saying that their numbers are all wrong; I’m saying that I don’t know what the truth is. A press release from a company that has a direct and obvious profit motive for over-hyping shouldn’t be treated as a solid fact. Unfortunately, a highly suspect number is far more attractive to a writer than saying “I don’t know what the truth is.”

CacheLogic seem to have been pretty successful at getting their numbers into the collective consciousness. Traditional media like Wired Magazine, BBC and Reuters trumpet the numbers as if they were a fundamental rule of the internet (like Rule #34: There is porn of it. No exceptions.). Then, the numbers are repeated ad nauseam until sites like Slyck News can pepper a story with them without even needing to cite the source, since everyone knows it’s true.

Let’s stop pretending we know things that we don’t. There’s nothing wrong with saying “I don’t know”; there is something wrong with pretending you know what you really don’t. Let’s get our numbers from someone who isn’t trying to sell us a solution to the problem the numbers describe.

(For more skepticism about CacheLogic’s numbers, check out Peter Sevcik’s piece at Business Communications Review)

From the Onion to the NYTimes

Do you remember my Point / Counterpoint post from September, where I pitted changes at The Onion against changes at The New York Times? Probably not, but thanks to hyperlinking you can pretend you knew all about it.

Anyway, the New York Times, obviously concerned with my feelings on the matters, took it upon themselves to hire Khoi Vinh from Behavior (via). Mr. Vinh was part of The Onion’s redesign and was no doubt hired to give the Times the same credibility as “America’s Finest News Source.”

I can only take from this acquisition a sense of the immense power that I wield with my blog, and will promise to only use it for good. This will largely be accomplished through posting infrequently about hyper-technical topics, per current operating policy. I also look forward to NYTimes.com switching to Drupal.

Mechanical Turk data point

While this shouldn’t be considered a definitive guide to Amazon’s Mechanical Turk, my friend posted his experiences to his private blog and I thought I’d share (with his permission, of course).

First off, if you’re not familiar with the service it’s a way to pool human talent over the Internet. There are some tasks that people are just better at, like typing the name of a pictured album cover or transcribing a minute of audio. Amazon pays people to perform these tasks (called “Human Intelligence Tasks” or HITs), and people can pay Amazon to have these tasks performed. Ingenious, the same way eBay or the Wikipedia is.

OK, here’s what my friend in the program had to say:

There are a few problems with the system that make completing HITs more difficult than it needs to be. For example, you must first look at a possible task, then click to accept the task, then complete the task and hit submit. That’s a 3-step process that could easily be streamlined. You’re wasting time that could be used more efficiently completing tasks. A lot of the time, by the time the page loads and you click to accept a task, someone else has already accepted it, forcing you to do it again. Luckily the geek community has come up with several Greasemonkey scripts that automatically accept HITs for you and make submitting them easier by stripping away extraneous images and text. I personally use TurkOp with the Opera web browser. I use Opera simply for the fact that it’s a different browser than my primary browser, Firefox. This makes it easy for me as everything is contained in a separate browser that I can have set up just for that task, while leaving other web browsing alone.

[…]

I have been doing this in my spare time for about two weeks now (a few minutes here and there, or maybe I’ll sit down for a session on the weekend and blow through a few hundred) and I’ve already earned over $150.

I was surprised at how much money he was able to make in the program. It’s not enough to live on, but it’s more profitable than spending downtime playing a Flash game (or, uh, blogging). Anyway, I thought it was interesting to find out how the program is paying out and figured I’d share.

Buy.com followup

I’ve got a followup to my earlier post about Buy.com. My girlfriend submitted a complaint to the Michigan Attorney General’s office through their website, describing her experience. They then contacted Buy.com, who not only finally shipped her bag but also gave it to her for free!

It was certainly a good resolution on their part, but I think she would rather have just gotten the bag at the listed price when she bought it, without 4 months of uncertainty. It’s a shame that she couldn’t resolve this with Buy.com’s support and was forced to contact the AG.

del.icio.us search and tag tip

Searching del.icio.us is slow; finding links by tag is fast. Always look for a bookmark by tag first, then search when that fails, and then fix the tags.

For instance, I was looking for Notes on developing a web service API so I typed http://del.icio.us/revgeorge/webservices into my address bar. When it didn’t show up, I searched del.icio.us for WebJay and found the link.

Here’s the minor magic: instead of going directly to the link, I edited the bookmark to add the tag I had initially guessed it would have. That way I’ll be able to find it by tag next time, which is a lot faster than searching, since I have a pretty good idea what tag I’ll reach for.

Not rocket science, just a good habit to get into.

Of course, next time I need that specific link I’ll probably just go to my search bar instead of my address bar, because I remember what I blogged better than what I bookmarked. But that’s just me and my crazy, kooky brain; you get the general principle.

Point / Counterpoint

POINT

Here’s a bunch of archives

By The Onion
America’s Finest News Source

The Onion has opened up its archives to the world for free. We used to think that charging for access to the archives was a good idea, until we did it.

Our accountants have compared the ad revenue for non-protected content against the meager subscription revenue and ordered our web team to open up the archive.

While we are America’s Finest News Source™, we are such only in so far as it makes us money. Lots of money. We’re not all that concerned with an “informed populace,” except where that populace is informed by our advertisers. Therefore, we are happy to show you our complete archives online, with informative information from our many sponsors.


COUNTERPOINT

There’s a bunch of archives

By The New York Times
Gray Lady

The New York Times has opened up its archives to the world. Over there, don’t you see it? Oh, I see the problem, it’s $40 to access our archives. What? You paid your $40? Oh, you must have already looked at 100 archived things this month. Well it’s your own fault, really, why are you so interested in our archives anyways?

Listen, we’re the paper of fucking record. We make sure the public is informed, and the best way to do that is to protect our archives. You wouldn’t let a kid covered in mud sit down at a grand piano, so why would you let just anyone browse through your archives? That’s right, your archives. The archives are yours and mine and everyone’s, assuming everyone coughs up $40.

Oh, you want us to be more like The Onion? Well I got news for you, kid. We’re so noble in our aspirations, we barely have any ads (unlike a certain other paper I could name). Take an article page, for example. We have a mere one interstitial, and then when you finally get to the page we only use half the page width for our giant towering ad, aside from the assorted ads at the bottom.

We’re not a big organization like The Onion either. Sorry, we’re just not. How many people do they have, 100,000? A million? No one really knows, but how do you expect us to be able to compete with a giant newspaper like them? We’re just one little paper, we can’t be held to the same standards for openness as The Onion.

Geeks want to help too.

One of the things I saw about the relief efforts for Katrina is the number and diversity of missing persons sites. Geeks want to help too.

One of the problems that was obvious to the geeks was that there were data about missing people that needed to be shared, and so the geeks pitched in and set up databases to share that information. The Red Cross Family News Network, a database from the National Center for Missing and Exploited Children, one from MSNBC, one from CNN, and lots more.

So now if you are looking for someone, you need to search a bunch of websites to find them and you’re never quite sure if there’s one more site that you missed that says the person you’re looking for is OK. Tyranny of choice indeed.

By trying to solve one problem, the geeks caused another one. It’s understandable. Geeks want to help too and this is how some knew how to do it.

Telling people “leave it to the experts” doesn’t generally work, and maybe that’s a good thing. The solution needs to allow people to pitch in and help, while at the same time getting the best information out.

Before the next emergency, the Red Cross should create web services that allow anyone to tie into the Red Cross’s search facility. This allows people to pitch in and help, to do smart things that make the search more useful, while at the same time getting the information out to everyone who needs it.

There are a few ways to do it, such as allowing sites to query the Red Cross for searches (to allow mashups like missing people on Google Maps or SMS) or aggregating searches like A9 OpenSearch. Those aren’t mutually exclusive and there are other things they can do too. The Red Cross’s geeks should know what’s possible with their setup, so I won’t pretend to give them specific advice; I’ll just ask that they help the internet community help them.
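To make that concrete with a made-up example (the endpoint below doesn’t exist; it’s just the shape of the thing), an OpenSearch-style interface would mean any volunteer site could run the same search with one plain HTTP request and get results back as a feed:

    # Hypothetical URL, for illustration only.
    curl "http://familylinks.example.org/opensearch?searchTerms=Jane+Doe&format=rss"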

Yahoo!, to their credit, used their search technology to provide a search interface to many of the missing persons databases. That’s a good example of what I want to see.

I’m not saying that this is a major victory over hurricanes or anything. Now is not the time to come up with a web services strategy. People in the Astrodome do not need Friendster or blogs or web services; they need water, food and hygienic conditions. Once it’s time to start preparing for the next emergency, that’s when it’s time to start working on this.

Oh, and please donate to the Red Cross. Please.

Update: The AP talks about this exact same problem. (Thanks Ed!)

Also, Jeff Jarvis discusses this same problem and some solutions on BuzzMachine and the NPR program On The Media for Sept 9, 2005.

Also also, if you follow the “do not need Friendster” link above, you get to a blog post that I had assumed was written by some well-intentioned geek far removed from the action. Instead, it’s actually a recommendation from someone who was in the Astrodome. More in this comment.

Welcome Back Me!

I’ve been on vacation for the past week and a half in the Bay Area. I mentioned this on my del.icio.us but not here, since I figured I don’t update this enough to really impact the update frequency. I loved CA. I’m still decompressing (still over 2500 RSS items to read, and that’s after unsubscribing from some feeds) so this will be even more disjointed than my normal posts.

I was reading Lessig’s The Future of Ideas and he argues for end-to-end designs and dumb networks. World of Ends makes the same argument. As web developers, are we making the network too smart? Or are web applications just more ends to the network?

Do web applications side-step the GPL? If I distribute a desktop bookmark manager built with GPL libraries, I’m forced to release my application under the GPL, yet del.icio.us isn’t required to be open source (and I’m not arguing that it should be). I’m not the first to raise the issue, but it seems to violate the spirit of the GPL, if not the letter. Perhaps a software bill of rights for web applications needs to be drafted, something that guarantees access to my data on your server, open formats, etc.

Why can’t my bank give me an AJAX competitor to Quicken on their site? Web applications are best suited for dealing with remote databases; why deal with importing my account data into a desktop program when it’s already on their server? I wonder if Intuit has some sort of backroom deal to keep online banking stupid, the way they keep the IRS from serving people online in any useful manner.

Airlines tell us not to operate cellphones on flights because of potential interference risks. If all a terrorist needs to take down an aircraft is a cellphone then perhaps we’re better off with semaphore. Maybe the open spectrum initiatives should rephrase their argument in the language of scare mongering. Can you tell I was reading Lessig’s chapter on open spectrum during the plane ride?

More later, maybe.

Updated to add more:

Expect to see Apple very interested in the Wine project when the x86 systems reach consumers. If OS X can run Windows binaries natively then there will be two commercial desktop OSes: one that can run Windows programs, and one that can run both Windows and OS X programs. It’s not a shoo-in win for Apple though; being able to run Windows programs didn’t save OS/2.