To the cloud! Part 6

From an earlier post

Somewhere in the pipeline is declarative provisioning and configuration lab work which might go better in the cloud, but I might be able to do it at home, too, with up to four Hyper-V hosts ready to go.

I also have a cloud at work, and some of the things I want to do are relevant to my job.

I keep forgetting how recent a lot of my knowledge is. That was only 15 months ago. Since then I’ve done a fair bit of provisioning and configuration automation at work. I can tear down and rebuild about a dozen servers at a whim (and have done so) to set up an application cluster.

This weekend I was reviewing my old cloud posts and looking at Amazon S3 storage again and wondering why I hadn’t done it yet. Seeing how recent that post is makes me realize that my needs are different now than they were the last time I thought about S3 or cloud VMs.

I thought I had poked at a couple of S3 buckets before, but apparently not, or I deleted them immediately. For work and career reasons I feel I need to do more with cloud APIs, and the S3 API is immediately relevant. Ceph is a free product that became production-ready 3-4 months ago and has an S3-compatible API. While my products at work currently have blockers to using the public cloud, the company as a whole wants to be cloud-first. (Although for many applications, an internal cloud is required for various security reasons.) So Ceph/S3 may solve some problems for my product going forward.
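
One thing that makes the S3 API attractive here is that the same client code can point at either backend. A rough sketch with boto3 (the AWS SDK for Python); the Ceph gateway endpoint and credentials are made-up placeholders, not a real deployment:

# The same S3 client code can target AWS or a Ceph RADOS Gateway,
# since Ceph speaks the S3 protocol. Placeholders throughout.
import boto3

# Against real S3, boto3 finds the AWS endpoint and local credentials itself.
aws_s3 = boto3.client("s3")

# Against Ceph, point the same client at the internal gateway instead.
ceph_s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-gw.internal.example:7480",  # placeholder
    aws_access_key_id="CEPH_ACCESS_KEY",                  # placeholder
    aws_secret_access_key="CEPH_SECRET_KEY",              # placeholder
)

# Identical calls work against both.
for client in (aws_s3, ceph_s3):
    print([b["Name"] for b in client.list_buckets()["Buckets"]])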

So, I decided to put a site on S3. (S3 can be used as a static web host without the need for an EC2 instance.) I arbitrarily picked jimnelson.us, which can also be reached as www.jimnelson.us, but the ‘proper’ name has been the one without the www. For technical reasons (naked/apex DNS names can’t properly use a CNAME) I’ll have to either move some of my DNS to Amazon Route 53 or go with the www version and redirect from the naked domain.

In addition to enabling static web access I had to add a policy to allow anonymous users to read via the web interface. (Copied and modified; not sure what the “Version” is about yet.)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.jimnelson.us/*"
        }
    ]
}
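
Side note to future me: this can be scripted rather than clicked through. A rough boto3 sketch that mirrors what I did in the console (it assumes AWS credentials are already configured locally and that an error.html exists; it is not literally what I ran):

# Apply the public-read policy and enable static website hosting.
import json
import boto3

s3 = boto3.client("s3")
bucket = "www.jimnelson.us"

policy = {
    # "Version" is the policy language version, a fixed identifier
    # defined by AWS rather than a date you choose.
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::" + bucket + "/*",
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},  # assumes an error page exists
    },
)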

And then I changed www.jimnelson.us to a CNAME pointing at www.jimnelson.us.s3-website-us-east-1.amazonaws.com after updating the site to prefer www (canonical name in the HTML head).

I also enabled logging, which goes into a different bucket.

S3 isn’t actually a filesystem, and one of my recent questions has been the difference between a filesystem (what everyone is used to) and an object store (like S3). One difference is that you can’t just append bits to a file like you can on a filesystem; you have to re-upload the entire file/object. So log files don’t seem to be a good fit for S3. Curiously, S3’s built-in logging seems to simply create a new file for every log entry…or maybe it stuffs a few into each file if the hits come in close enough together.
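
To make the no-appending point concrete, here is a rough boto3 sketch of the only way to “append” to an object; the bucket and key names are made up:

# There is no append call in S3, so "adding a line" means
# round-tripping the entire object.
import boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "app.log"  # made-up names

# Download the whole existing object...
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# ...append locally...
body += b"one more log line\n"

# ...and re-upload the whole thing. For a growing log, every "append"
# re-sends everything written so far.
s3.put_object(Bucket=bucket, Key=key, Body=body)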

(Another difference is mostly invisible, but each object can be in a different physical place, which has caused at least one person on the Internet large delays when accessing a large number of files with wildcards.)

So basically each file/object is like a book you check out and place back on the shelf in its entirety, whereas on most local and network filesystems you can access bits at a time or continually add to the end of a file. (Actually, now that I say that…I wonder whether, with a big movie file, you could play from the middle…need to check that out. But incremental or partial updates are definitely an anti-pattern for object storage.)
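
Partial reads, at least, look like the easier half: S3 honors HTTP Range requests, so a client that can seek by byte offset can fetch just a slice of a large object. A quick sketch, again with made-up names:

# Ranged GET: fetch roughly 1 MB starting at byte 1,000,000.
import boto3

s3 = boto3.client("s3")
resp = s3.get_object(
    Bucket="example-bucket",        # made-up
    Key="videos/big-movie.mp4",     # made-up
    Range="bytes=1000000-1999999",
)
chunk = resp["Body"].read()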

Also, prices for S3 storage seem to have fallen since I first considered it for backup in 2010. In that thread I mentioned buying two external USB drives, which I still have, but I almost never use them. I have automatic local backups happening in a couple of places, but a drive failure could lose it all. Well, all since the last USB drive backup. With S3 (or another online service) I could have the backup script upload there as well. And S3 has good security controls and automation, so I could fairly easily arrange for a backup script that has limited access to S3 and can’t delete its uploads if compromised, and even have S3 rotate files to cheaper, slower storage. For a price, but in reality the USB backup idea isn’t “working”.
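
As a sketch of the “limited access, can’t delete” part, an IAM policy along these lines (bucket name made up) would let a backup user upload but not delete. Since s3:PutObject alone can still overwrite an existing key, turning on bucket versioning would be the belt-and-suspenders addition:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BackupUploadOnly",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-backup-bucket/*"
        }
    ]
}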

Google and Microsoft have cloud storage as well, as do others, but AWS is the big dog, with Google and Microsoft following. My immediate interest is to get used to AWS’ S3 API and perhaps Microsoft’s OneDrive for Business.

Observations: After considering online backup in 2010 I bought two 2TB external drives and haven’t really made use of them. (In fact, now that I’m thinking about it I’ll do a one-time upload to S3. 54 GB of backups…see what that bill is.)

And in the past 15 months I’ve bought two new servers to replace my old lab stuff. Those I’m actually making use of, but now I’m looking for reasons to use the cloud again.

The new DVR saga begins

I upgraded my last Win7 box–the Windows Media Center–to Windows 10.

I had long been intending to figure out what’s next for my DVR because MS is clearly trying to kill WMC. But it just hasn’t been a priority because I don’t watch as much tv as I used to.

I mainly just record a few shows…the late-night shows because every now and then something newsworthy happens and I can go back and watch it, some cooking shows, some science shows, and a couple of random entertainment series. I watch maybe 1-2 shows a week lately.

I had also been dragging my feet mentally designing the new setup…a backend server in the other room with the pc-at-the-tv being just a player…or maybe even have the TV’s smart features play from the backend….

But now I’m running up against the free Win10 upgrade deadline, which is a bit silly because I could probably manage a cost-free solution in the future if needed, but this is one of my three paid Win7 licenses. And I know I want to get away from WMC eventually, and I’m not using the DVR much lately, so I pulled the trigger.

I installed NextPVR. It was fairly easy to get working, but it won’t play back video! Decoders something something. But it lets me record, and although there are some free ways to get programming guide data, they are apparently a pain to keep up with, so I’ll pay $25/year for Schedules Direct. (At the moment I’m on a 7-day free trial.) Schedules Direct will work for any other PVR software I’ve considered, too.

But yeah, NextPVR doesn’t include its own decoders, and apparently Win10 doesn’t let 3rd-party apps use its MPEG2 decoder. :-P I’m pretty sure my Intel 4500 graphics has integrated MPEG2 decoding, but as per usual with free stuff I’ll have to tinker to make it work.

But it records just fine, and I can play the recordings with VLC, and I’m sure there are numerous ways to make it work, but it doesn’t “just work” “out of the box”.

I also had to hunt around to set up the necessary items rather than have a setup wizard.

I’m also planning to install Kodi (formerly XBMC). The idea is that Kodi will be the front end that can play other videos but also act as an interface for the PVR.

No rush, though. I’ve got my recordings scheduled again, and one already recorded at 8am today, and I can watch it with VLC.

To the cloud! Part 5

Actually it turns out they reset the counter to 60 days…

But they apparently messed up some other stuff so I can’t actually do anything. My limits are all zero, so I can’t have any CPUs or IPs, for example. I can’t start my existing VMs or create new ones…

Decided I should check again before hitting the post button…now I have $408.91 in credits and 60 days again, but nothing works. I can’t view, create or start…

Ah, OK, I can create a second “project”, and now I seem to be able to use it. Yay. Good thing I didn’t have any data I wanted to keep on the other VMs, though not keeping anything important on a cloud-hosted VM is probably a good idea anyway. I recall my self-backups are what saved the board data when that VPS hosting company lost my VPS a few years ago.

To the cloud! Part 4

Hi,

Earlier this week, your Google Cloud Platform Free Trial account was cancelled in error. We have now reinstated your Free Trial account and granted the account an extra $300 in free trial credit. We apologize for any inconvenience this may have caused.

Regards

The Google Cloud Platform Team

Hmm. Well I thought the balance went from $149 to $18 in a big hurry. If I get time I’ll try to figure out what the “actual” bill was and how much fake trial money I blew through. But I guess for now I have another 3 weeks or so of free play time.

To the cloud! Part 3

I think the trick to the cloud is to automate creating new VMs when more capacity is needed and destroying them when it isn’t. Then, compared to paying for hardware support, power, backup systems, cooling, maintenance, and periodic refreshes, VMs can be more economical, especially for a smaller company.

For a guy who has a handful of old-but-usable junk with a web server and some other stuff on 24/7 in his house, apparently owning is still cheaper than the cloud.

On the other hand, if I had lab cloud VMs programmed to power down/destroy themselves after a certain amount of time it might be worth looking at again. But I’m not quite there yet. Right now I’m playing with the Windows Server Technical Preview, which I need my own machines for. And I’m eager to try out Nano Server as a Hyper-V host as well as Nano VMs. Somewhere in the pipeline is declarative provisioning and configuration lab work which might go better in the cloud, but I might be able to do it at home, too, with up to four Hyper-V hosts ready to go.

I also have a cloud at work, and some of the things I want to do are relevant to my job.

To the cloud! Part 2

Google had the most reasonable trial plan. You got 60 days and $300 of services, and if you ran over, it would just pause them until you decided to pay and upgrade.

30 days in, and I’ve blown through the $300 :-O . I wasn’t paying attention because it’s not real money. I’ve had one VM on since I started, and I fired up two more for some reason or other and never bothered shutting any of them down. $100/month per VM sounds a bit rich for my lab needs, and since I am the lazy, forgetful type I am reconsidering how much experimentation I want to do in the cloud. I’ll definitely do some things, but I think I’ll make a point of stopping and destroying all VMs as soon as I’m done with them; perhaps even set up a self-delete script in case I get distracted.
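
Something like the following is what I have in mind for the self-delete script: a sketch, not tested, which assumes the VM has the gcloud tool installed and authorized and that cron or at kicks it off after some timeout. It asks the GCE metadata server who the instance is, then asks for its own deletion:

# Self-destruct for a lab VM on Google Compute Engine.
import subprocess
import urllib.request

def metadata(path):
    # GCE instances can query this well-known metadata endpoint.
    req = urllib.request.Request(
        "http://metadata.google.internal/computeMetadata/v1/instance/" + path,
        headers={"Metadata-Flavor": "Google"},
    )
    return urllib.request.urlopen(req).read().decode()

name = metadata("name")
# The zone comes back as projects/PROJECT/zones/ZONE; keep the last part.
zone = metadata("zone").rsplit("/", 1)[-1]

subprocess.run(
    ["gcloud", "compute", "instances", "delete", name, "--zone", zone, "--quiet"],
    check=True,
)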

I may hang on to my lab machines for a while longer.

To the cloud!

I finally signed up for an Amazon Web Services account with the intent of kicking around their EC2 VMs. I was going to say I don’t expect to immediately move any sites there, but then again why not? Not this one, but I have multiple unimportant static sites…might as well throw them out there to see if I learn anything from the experience.

I’ll probably also sign up for Google’s Cloud which I think is called Google Compute Engine.

I was going to sign up for Microsoft Azure, but I think I’m going to get an MSDN subscription through my company and take advantage of some free Azure extra benefits from the subscription.

I’ve been overthinking how to get into the cloud recently. The hesitation is that I might accidentally do something non-free, but even if I do I don’t think it will be a huge financial hit, and besides, how better to learn the pricing tiers than to jump into them?

Ok, terminology correction / clarification

Google Cloud Platform is the service name analogous to Amazon Web Services, and Google Compute Engine is the product analogous to EC2. The former pair are families of product offerings, and the latter pair are the main products I’m interested in: infrastructure as a service (IaaS), or more generally, virtual machines.

IaaS VMs are virtual replacements of my home lab PCs. I can install a choice of operating systems on them and then configure and manage them however I want.

Search Engine Optimization

I have a couple of web sites, and sometimes I read about search engine
optimization when I’m wondering how to get more traffic. It’s amazing
to me how much information there is out there and how many opposing
opinions there are.

But the following occurs to me: the search engines’ goal is to provide
relevant content to search queries, and they have to deal with all sorts
of searchers and all sorts of content providers. My goal should be to
write content that someone else might care about and leave it to Google,
Bing and the rest to put us together rather than try to tweak my page
to inch up the SERP / results page.

Oh, I’ll keep a few SEO tips in mind when making content, like using the
title as the URL and–without keyword stuffing–trying to include synonyms
while writing. It’s almost second nature now; without really thinking
about it I managed to put “SERP” and “results page” in, and also “SEO”,
“search engine optimization”, “Google” and “Bing” in this short bit of
content. After that last sentence I am probably stepping over the border
into the land of keyword stuffing, but before that it is natural to want
to vary the wording anyway, especially if you’re not sure all readers
know what SERP is.

But frankly, writing more useful articles is more productive than trying
to fine-tune SEO.

What is a useful article? My most popular ones here are the ones where
I did something and simply said what I did, why and what I learned
along the way. They aren’t articles I put a lot of effort, directed
research and thought into. They are basically notes to my future self
on how and why I did something, and those are surprisingly useful to
other people, too. For example, I have quite a few posts on IPv6 where
I tried to comprehensively cover the topic. I don’t think anyone has
ever noticed. Then one day after looking up command syntax on neighbor
discovery for the nth time I wrote a quick post about how to do the
equivalent of arp in IPv6. That’s the only IPv6 post here that gets
any search engine love, and at times it’s been my most popular article.
(I later went back and added a video to the page; it doesn’t seem to
have driven any traffic to speak of. Just the text itself–much of which
is sample commands and output–is what brought in traffic.)

So don’t spend too much time playing the SEO game. Write something useful
and let Google and Bing figure out who can find your contribution useful.

Blogging in Jekyll

I migrated the blog to Jekyll. I am ensuring all the important links are
in place, but a lot of the WordPress-generated tag and category indexes
and such are gone.

WordPress is nice, but I don’t like upgrading on its schedule, and
having coded my early web pages with raw html and server-side includes
I often found myself wanting finer control over content than WordPress
would easily allow. Jekyll isn’t necessarily easy, but I can (mostly)
make it do exactly what I want.

I had a few–but not many–comments on WordPress. I am using Disqus
with Jekyll. For now the old comments live on in the “front matter”,
but I have not yet decided whether I should import them to Disqus,
add them to my static pages or just let them disappear altogether.

Re: Server Upgrade

Notes to self:

The Linux IPv6 router VM keeps failing, so I built another one on a newer Linux. For some reason it isn’t successfully running the post-up commands on eth0 upon reboot, but ifdown followed by ifup works fine.
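
For reference, the config in question is the Debian-style /etc/network/interfaces kind, roughly like the sketch below (addresses are placeholders, not my real setup). The first thing to check is what is actually bringing eth0 up at boot, since post-up lines only fire when ifup itself processes the interface:

# /etc/network/interfaces sketch -- placeholder addresses
auto eth0
iface eth0 inet6 static
    address 2001:db8:0:1::2
    netmask 64
    gateway 2001:db8:0:1::1
    # post-up runs each time ifup raises eth0; if it doesn't run at
    # boot but works after ifdown/ifup, something else raised eth0.
    post-up /sbin/ip -6 route add 2001:db8:beef::/48 via 2001:db8:0:1::1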

Also continue to set static IPv6 addresses on the physical servers and establish DNS server addresses. Losing IPv6 connectivity has caused a lot of frustration.