Dan Newcome on technology

I'm bringing cyber back

Python versions

I’m a little slow on the uptake with version management of language runtimes and environments. For a long time I just followed the Linux distribution convention of a single global version of each dependency and compiler on my system. This changed during my years doing NodeJS development, where it seemed the language and tools were changing daily. Node Version Manager and n saved the day back then.

I wrote a bunch of Python after that. The shop still used Python 2 because those were the years when the breaking changes of Python 3 were slow to propagate through the open-source dependencies of the Python packaging (PyPI) ecosystem. So I think I had the same old version of Python globally installed for my whole time there.

Fast forward a bit and we are in a place where running everything in a Docker container is the norm. Old habits die hard and it took me a while to embrace this over the convenience of having local tooling installed on my development workstation.

OK, that was a long preamble to a post I’m writing so that I remember the best way to get Python 2 and Python 3 virtual environments installed side by side on my workstation.

Virtual environments are the way. The complication is that venv, the canonical way of creating virtual environments in Python 3 that has been serving me well, is not part of Python 2. Prior to venv we had the virtualenv tool, so I will go through that here as well.

Python 3

I will start with Python 3 since that’s mostly what I’m using now and it’s comparatively easy with respect (and retrospect) to Python 2. Python 3 comes with built-in support for virtual environments using

$ python3 -m venv envname


This command will create a folder in the current directory called envname. The folder will have linked versions of python and related tools along with a local folder for installing dependencies. This environment must be activated before use. We can do that using

$ source envname/bin/activate

After that the virtual environment is active and any subsequent calls to pip will install dependencies locally under the environment folder.
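For completeness, the whole round trip looks roughly like this (requests is just a placeholder package for illustration):

$ python3 -m venv envname
$ source envname/bin/activate
$ pip install requests        # lands under envname/, not globally
$ deactivate                  # back to the system Python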

I will admit to installing Python 3 on my Mac the lazy way with brew. Debates abound on using package managers to install development tools. I tend to prefer using the package manager unless I have an advanced use case for needing to be flexible and work in many codebases that might be using different configurations. It turns out that pyenv is able to install both Python 2 and Python 3 so I might start using that to install my Python 3 version as well.

Python 2

Python 2 is a little more involved. I used pyenv to install Python 2 like this:

$ pyenv install 2.7.18

Once installed, it lives in a folder in your home directory:

~/.pyenv
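A quick sanity check is to list what pyenv has installed there; with a default pyenv setup both of these should show the interpreter we just added:

$ pyenv versions
$ ls ~/.pyenv/versions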

So now the trick is to create a virtual environment using the Python version installed in this directory. To do this we use virtualenv.

$ brew install virtualenv

$ brew install python-virtualenv

I’m not actually sure if we need both of these.
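For what it’s worth, I believe the pyenv virtualenv subcommand used below actually comes from the pyenv-virtualenv plugin rather than from plain virtualenv, so if the command comes up missing this is probably the piece to add (my assumption; I haven’t confirmed it’s strictly required for this setup):

$ brew install pyenv-virtualenv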

$ pyenv virtualenv 2.7.18 envname

$ pyenv activate envname

$ python --version

Python 2.7.18

Yay it’s working. So for my future self, I think I will install all Python versions including Python 3 with pyenv. But this works too.
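One more trick for future me: pyenv-virtualenv can pin an environment to a project directory, writing a .python-version file so the environment activates automatically on cd (assuming the pyenv shell integration is set up). The project folder name here is made up:

$ cd legacy-project          # hypothetical project folder
$ pyenv local envname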

Written by newcome

April 29, 2024 at 2:52 pm

Posted in Uncategorized

Sharpening the Saw

I’ve been deepening my Maker practice over the last year or so. I’m working with physical things: making electronics happen, doing art, and making things out of metal. Loosely this could be defined as art, but it’s more of a multi-disciplinary exploration of building things. I’m working out the details of organizational systems for tools and materials, and also for time, effort, and learning. My background is in Electrical Engineering, so I’m no stranger to lab work and dealing with atoms, so to speak. My focus shifted to software, however, and some of the corporeal elements of my work faded into the background, with the exception of things like mechanical keyboards, displays, and ergonomic chairs.

My new focus on building things has exposed me to different welding processes, cutting and bending techniques, and generally dealing with unwieldy physical operations. I’m always struggling with organization and process since I’m a consummate multi-hyphenate kind of person, so adding so many new tools to the mix started to upset the apple cart a bit. I went back and forth on all sorts of ideas to organize my shop space and make myself more efficient. I watched so many YouTube videos on shop organization and silently wondered if some of these people actually benefit from their ultra-organization. Adam Savage at least admits that he just goes one step at a time and experiments, and things blow up, and he backtracks and reorganizes all the time. I’m somewhere on that spectrum as well, but I think that Adam is kind of an outlier in the sheer breadth and volume of stuff that is in his orbit.

Where does that leave me? I’m doing the hard work of parting with things that don’t serve me and raising the bar on what I consider essential tools and materials. In the past I would buy a lot of cheap tools and try to always have things on hand. I’m starting to peel that back a little bit and get fewer higher-quality tools. It’s a long road to figure out what works for you and I think I’m finally starting to see some light at the end of the tunnel. There is no one philosophy that works in all situations. I’m starting to realize this. Some shops are very opinionated in their organizational strategy and this most often reflects the sensibilities of the person running the show. Some shop people are completely OCD about every small thing being in place and cleaning everything up every time. I laud these efforts but I feel that it takes a certain personality type to make this work without feeling shame and guilt.

One thing I learned from Adam was the principle of first-order retrieval. Some things just need to be literally “at hand”, meaning, for me, that some things need to be within reach while I’m sitting at the workstation I use for a particular task. It’s worth buying a second tool to put in a specific place so that it’s at hand immediately. First-order retrieval means not having to move anything out of the way or open a drawer or pull out a bin to access the tool. Drawers are a fine line: I could think of a very neatly organized tool chest as first-order adjacent, but it’s still not quite the same as grabbing a tool off of a caddy on the bench or on the wall. Speaking of bench tools, I’m torn on whether to normalize keeping tools of any sort on a work surface. It’s tempting, but I think I’m going to explore a rule that the work surface should normally be completely open. I have my benches on wheels so I can roll them around (not that I do very often), but things like power strips and cables, monitors, keyboards, and mice all need to be corralled so that the table can actually move away from the wall without a hundred things falling over or a cord brushing by and knocking everything down.

Not everything needs to be first-order retrievable. Part of the work here is to identify the 80/20 of keeping flow state. It’s impossible to keep everything at hand. The goal is to keep a workflow that maintains flow state as much as possible, with some compromises. Some theories here include separate workstations for different activities and workstations devoted to specific projects. Another question is how many active projects are optimal. I think it’s more than one. Here we get into philosophical differences in how different Makers operate. Some are focused single-project finishers while others need to have several things on the burner at once. I’m in the second category, but it can get out of hand. I think there is an optimal number of active projects (I haven’t figured out that number yet though; it may take me my entire lifetime).

One approach to doing projects is to order all the materials you think you need up front and batch the whole project. Unless I have done the exact thing before, that’s almost impossible to do. There are always some things that aren’t considered up front due to oversight or lack of knowledge. My approach is usually to just try to get to the next step, which generally involves some slack time as I figure things out step by step. Since there is a kind of slow roll to some of these projects, it makes sense to interleave things and work on several at once. Part of this is budget. I think if I devoted more resources I would be able to over-order and just accept the fact that I would have a ton of things at the end of the project that I didn’t end up needing. This is another 80/20 situation where there’s probably an ideal mix of aggressive planning and sourcing, at the beginning of a project or in a middle stage, that can accelerate things without the budget getting out of hand.

A major organizational shift for me happened when I got a second workspace at my shop. I have two 20′ shipping containers side by side where I do my work. The first container I occupied for several years before getting the second. The second container started out as a place to build guitars and do non-metal work. The first, or “dirty”, container remained for metal and heavier work. This basic scheme hasn’t changed much and turned out to be a pretty good method of keeping things from getting totally fouled up. I still run into the situation all the time where there is a tote of random stuff from a project that gets all tangled up: electronic bits, wires, tools, a sledgehammer, and a giant piece of steel somehow all in the same tote. There will be a tube of glue or something that gets under the hammer and leaks all over, and the wire will be tangled around everything else in there. Physical organization needs to take into account the size, fragility, and general tangly-ness of a thing in addition to its type of use and which project it goes with.

Speaking of going with projects, there’s also the issue of keeping tools with project materials. At first this seemed like a revelation. Having a soldering iron along with my electronics project made it easy to pull that project out and work on it. Over time though, I started to lose track of my tools and now I had to go through a bunch of project bins to find things. This can probably work with some discipline but now I’m more inclined to “kit” tools into functional groups and pull the whole kit into a project and put it back when the project is shelved.

Kind of jumping around here: working on multiple projects means you need a way to stop and start work. This is a logical and also a physical concern. Many times I can’t remember where I left off and it’s hard to pick back up again. What was I thinking? What’s the next step? Do I have all the materials? It’s also a bad habit of mine to have multiple projects out on my workspace. Ideally I would only have a single active project out at a time on any individual workspace. It’s nice to be able to break a long welding project up over a few days by leaving it on the welding table, but if I need to weld something else in the meantime it makes things a little difficult. One solution is to make projects easier to suspend by allocating most of the project to bins or temporary storage. I still don’t have a good technique for this, but I’m thinking that part of it could be calling projects finished more often. Once something gets to a certain stage of completion, sometimes it makes sense to wrap it up, and if I want to “continue” that project it’s really a new project that builds on the learnings of the first. I end up with some infinite projects that just keep going because I get new ideas or I keep moving the goalposts.

Showing progress is essential both for mental well-being and for communication with people who might be following your work. It’s difficult to post something incomplete to Instagram because you know there will be so many obvious comments, but I think the benefit outweighs the drawbacks. I still haven’t mastered the art of reading the comments. I think you just have to remember that comments are about the commenter, not about you.

One resolution I have for this year is to not ignore friction but also not to focus entirely on optimization. I’m not good at finding this balance. It’s the classic idea where you want to write more so you find the perfect editor or the perfect blog platform, or maybe you should write your own, oh and it should be hosted on AWS because I’ve been meaning to brush up my skills there. These things need limits. They don’t have to be absolute. Just sharpen the saw a bit. Perfectionism is a killer. The key is to make durable progress. This is a thing that I’ve been terrible at over the last year. It’s been a kind of race to get things done and show progress but at the end I’m buried in a pile of tools and notes and half finished drawings. Slowing down to go fast comes to mind. It’s overall better to have a change that lasts and can be built on rather than a large change that largely gets undone or forgotten about and not leveraged moving forward. The law of compounding effort comes to mind. Getting 1% better in steps building on the previous work is like compounding interest. It’s hard to remember that when you are in the trenches of a project.

Written by newcome

February 1, 2024 at 1:17 pm

Posted in Uncategorized

The quest for modularity

I was reading some of Avery Pennarun’s blog posts on system architecture and design. There was a set of bullet points that stood out to me that I will copy below:

The top-line goals of module systems are always the same:

  1. Isolate each bit of code from the other bits.
  2. Re-connect those bits only where explicitly intended (through a well-defined interface).
  3. Guarantee that bits you change will still be compatible with the right other bits.
  4. Upgrade, downgrade, and scale some bits without having to upgrade all the other bits simultaneously.

This article has some interesting heuristics as well:

State – If you don’t have state, just pure logic, you probably have a library, not a service. If you have state, but it doesn’t change with user requests what you have is configuration, either global or environment-specific. If changes to configuration are not time-sensitive, it can be a parameter to the deployment, which gets triggered when it changes. If they are time-sensitive you might need a configuration service that serves them continuously, rarely a bespoke one. If you do have global state that can change with, and across requests, you may have a service.

I have thought about this quite a bit over the years. My own heuristic revolves around units of deployment vs units of reuse. It’s kind of the same logic around static vs dynamic dependencies in C++. We make tradeoffs that might make sense initially (size vs complexity) but change as constraints change (more RAM/Disk, lower cost).

In the past I have favored library-driven development. I’ve come to learn that many times people are creating monolithic services (even if they are “micro”) that are really business-logic libraries. When I architected the UberNote service tier, I started with a monolithic service, but the logic was all in a .NET assembly that was nicely organized into namespaces, in a way that would loosely be called “domain driven” today, I think. If we needed to split things into multiple services, we could just use that library anywhere we needed to and each service would only call what it needed, or we could have split it into more than one DLL to decouple the “deployment”.

I think that the most important part of a system design is the logical decomposition initially, and secondarily the topology (servers and pods, networks, etc). If the aspects of the logical design (domain?) are correct we should be able to arrange things in different topologies as we grow.

Some of the other things that commonly become issues are configuration, provisioning/defining environments, tenancy and maybe some others. Some aspects of the “12 factor app” are orthogonal here. I’m not talking about logging, etc.

Written by newcome

November 28, 2022 at 10:31 am

Posted in Uncategorized

Middleware everywhere

I’m a huge fan of composition via middleware. Some people call them plugins, as we did back when I was working on Ubernote. We had plugins for all sorts of things. The only thing you need is a stable API and a default way of invoking things.

This turns out to be important later when considering how to do things in the Node/Express ecosystem. Express doubled down on plugins (middleware) and I think it worked out pretty well. React has something now called Hooks which I think of in a similar way. It’s a way of plugging behavior into the page render flow without relying on fixed page lifecycle methods. Page lifecycles are a pain. It’s one of my worst memories of ASP.NET (viewstate, etc) and initially React had a similar pain with lifecycle methods like componentWillMount.

My current role involves a large Ruby codebase. I’m not as familiar with that ecosystem yet and I was reading some code that used an HTTP library called Faraday. Recently a co-worker was walking me through some code that used this library and I commented on having seen it and noted that it must be like requests or similar.

Well, it is, but it has some pretty neat features like… plugins! So we can do things like OAuth token refreshes in the client middleware. Pretty cool. Middleware everywhere.

Written by newcome

November 14, 2022 at 4:38 pm

Posted in Uncategorized

Modern C#

Those of you who have followed me for a while know that my first significant industry experience was with .NET. Of course my first programming experiences were with C64 BASIC and Pascal on the Mac, along with college courses in C++, but it’s always those first real codebases where one cuts one’s teeth.

Anyway, it was very modern at the time, with a real type system and ahead-of-time compilation. A competitor to Java. I like to think that C# managed to learn from Java’s mistakes and became a better language for it. That, and Anders Hejlsberg is a genius.

I wrote a lot of great software in C#, including my own note-taking startup UberNote. Once I moved on in the startup world, Microsoft was a total non-starter. No one was using SQL Server or .NET or followed any of that ecosystem. From where I was sitting, the real issue with .NET was the community’s tendency to just clone the projects that Java did, in the same way (log4net, etc.), and the Web frameworks were lagging behind significantly. MS clung to ASP for way too long.

Fast forward to some recent conversations. I joked sometimes that I would bring Clojure in as a language, which runs on the JVM. Later on some folks turned that joke around on me with C#, and I asked, “why not?” It turns out that a lot of other devs around me have significant experience with C#. C# influenced the design of the now-popular TypeScript language. MS open-sourced a bunch of the .NET Core runtime. I wonder if it’s time for C# to really be unchained from MS and come into its own.

Looking at the modern language landscape, the most interesting things I see now are Rust and Go. Maybe some things like Scala. I really don’t know what can dethrone Python right now. I was on the dynamic language bandwagon for a long time and was a champion of Javascript the whole way through to today where it’s one of the most modern language ecosystems (yes there are warts but the rapid pace of development of the language is IMO unmatched – it reached true internet-scale development). TypeScript is in a place where it could take over as a systems language, but I fear that its roots in Javascript will forever hamstring its chances. Maybe with Deno this will change?

What is the best systems language to pick right now for back-end distributed systems? It’s kind of crazy to me that Python was the winning horse for so long. C++ was dethroned long ago due to sheer complexity. I think I need to learn Rust and do some tests with .NET on Kubernetes before I make any judgements.

Written by newcome

October 10, 2022 at 9:42 am

Posted in Uncategorized

Process stability and Software Engineering

I’m reading some of W. Edwards Deming’s writing and I can’t help but try to draw parallels between manufacturing fundamentals and statistical process control on the one hand, and the software development lifecycle on the other.

Deming was able to show scientific ways of managing hidden variables without trying to make wild guesses about systems up-front. I have seen management systems come and go in Software Engineering and seen backlash against things like the Capability Maturity Model and various ISO certification systems. We ended up coming back to simple things like Agile. Why?

I think agile has become a sort of cargo cult, but some parts of it fundamentally address the right questions about process viewed through the lens of statistical process control. The concept of measuring velocity is a key part, as is tracking estimation errors. I think where the cargo cult comes in is the assumption that the goal is to have the maximum velocity for a given sprint. I’d argue that the more valuable goal is to achieve a stable velocity over time, even if it seems slow.

Complicating matters a bit is that software development is a double-production process. Artifacts are produced which in turn produce other behaviors and further artifacts. Now we are talking about another concept: reliability. How does reliability relate to quality and to process stability? I think that there are statistical answers to this question.

I was listening to the CyberWire Daily podcast where one of the hosts was discussing cybersecurity insurance. The thesis is that threat models are so complex now that they can’t be calculated classically/deterministically. However, statistical models are very powerful and are able to find very good solutions to otherwise intractable problems.

This is going to be a short post, but hopefully it sows some seeds for me to return to in depth in a later series of posts.

Written by newcome

March 23, 2022 at 8:14 am

Posted in Uncategorized

All code is legacy code

I was doing some one-on-one meetings with some folks on my dev team and my colleague showed me an informal document that he was keeping. Sprinkled throughout were references to legacy code and proposals to move to different technologies.

I wanted to dig into some specifics but I casually mentioned that all of our code is legacy code. Code is kind of like copyright. It’s legacy once it hits the disk (“fixed form of expression”). I can’t recall exactly where I picked this up but it’s a distillation of an opinion that I’ve formed during my career working with codebases at varying levels of maturity. I’ve seen cheeky references to legacy code as “any code that works”.

I’m part-way through reading an excellent book about legacy systems entitled “Kill It with Fire”: https://www.amazon.com/Kill-Fire-Manage-Computer-Systems/dp/1718501188

I will have to read back and see if I can trace it back to this book. Either way you should read it (Disclaimer: I have not finished it).

Responses to my comment in the one-on-one have been trickling in from the rest of the team. Everyone seems to have interpreted it a little differently but I think that each interpretation actually embodies a facet of what I’m trying to convey. The context of my utterance was a system that we needed to keep up-to-date so it makes sense to treat the system like a flowing stream. It’s going to have some dynamics all on its own, and to make sure we can guide it we need to manage it. (Software is performance? Software development is a generative system? Too much for this post).

Managing the entire lifecycle of code from the start means treating new code as legacy. Sounds crazy, but it makes you think about your dependencies: are they well maintained? Is the code using a build system that is different from the rest of the system? What about testing plans (not the unit tests themselves, but the strategy for maintaining quality over time)?

The thing I didn’t expect was that it triggered a revelation in one of the other Senior Engineers that the decisions and technologies are always up for discussion. A codebase at rest can be an archaeological dig at worst and a collection of Just-So stories at best. It’s encouraging that this is getting people to ask why. I think that’s pretty amazing.

Written by newcome

March 22, 2022 at 7:51 am

Posted in Uncategorized

Immediacy

The last few years I have been going out to a festival in the Nevada desert. You can probably guess which one but it’s not really important for this discussion. One of the core tenets or “principles” of that gathering is embracing immediacy. In the context of the festival that generally manifests itself in a kind of a fractally recursive series of pleasant distractions. One of our rules is if someone asks “what is that?” we have to go find out.

Naturally in the real world this lack of prioritization can lead to trouble. However, I have begun to explore the idea of optimizing for immediate action. There is rarely a better time to do a thing than the current moment. Unfortunately this comes at the cost of potentially missing high priority tasks that just don’t happen to be immediately in front of us.

There are several “triage”/inbox zero type of strategies that rely on one pass over potential tasks to see what can be knocked out in 5 minutes or less. My approach here is to try to optimize for speed so some things can be approached in smaller chunks of time.

Digressing a bit here: a useful thing done in a small amount of time also depends on there being a nice stopping point for the task. So really there are two things to optimize for: quick to start and quick to stop. Stopping is important because the gains can be lost if the exploration is forgotten (if it is an info-task) or the intermediate result is misplaced, only to be done again the next time it’s immediately before us.

Before I do any more explaining I will give an example. Say I am looking at a broken power jack on a circuit board I’m holding. I think, OK, I just need to unsolder this thing. I have more barrel plugs in a little drawer in my organizer. The soldering stuff is in a plastic bin in a stack of bins. What do I have to do to accomplish this task? There are more moving parts than one would think. I have to put the board down somewhere. That might be a challenge if my bench isn’t clear. I need to pull the soldering bin out from under some other stacked bins, potentially, and figure out where to put it while I pull out the desoldering iron. I have to find a place to plug it in.

Immediacy can come into play in many places here. One is realizing that a good place for a power outlet is right in front of the bench, maybe mounted to the leg of the bench. Having a ready supply of double-sided tape would be something that could enable this. The ability to slap a power strip right there and use it right away for this task is what I’m talking about. It should take less than 5 minutes to evaluate whether it’s a good idea, and if it works out it’s much easier to remember, and maybe I’ll use screws or something more permanent. Kind of like paving the worn paths across the grass.

Other potential things in this example are, “where do I put the hot iron?” I have a little soldering iron holder on a clip now. The thing that led to this was my cat knocking something heavy onto my soldering iron holder, breaking it off of the base. I delayed throwing it out since I didn’t have anything else and figured I would fix it eventually. However, it was broken on the metal, and I kept thinking I could weld it so fast, but I had to remember to take it to my shop, which never happened. I had some steel wire and started looking for somewhere to lash it, and I had a binder clip lying around, and the idea worked and stuck. Pretty contrived, but how do I enable more immediate actions like this?

One touch systems. I did the clip-on soldering iron holder while the iron was hot in my hand. This is what I would call “one touch”. I was already holding the problem. Another example of this would be washing a pan before letting it go after serving out the food it contained. It’s probably not going to get any easier to clean after it’s cool.

Written by newcome

December 5, 2021 at 2:09 pm

Posted in Uncategorized

Go to the H1 of the class

I’m refactoring some HTML right now and it occurs to me that headings are problematic for non-frontend devs and inconsistently used at best among frontenders.

When I was at Yahoo, a big part of the styling philosophy revolved around Atomic CSS. These classes were a kind of predecessor to React’s JS-based inline style declarations. Styles were generated by a post-processor based on short descriptive class names: display: inline-block becomes D-ib, for example (not the real syntax anymore, but a similar idea). The compiler would generate the stylesheet to make the class name work, e.g.

.D-ib {display: inline-block}

The reason I’m bringing this up is that headings typically have some styling associated with them. The base stylesheets of browsers define them, and they are block-level at the very least. Most CSS template systems further define them and/or do some kind of “reset” that redefines them.

At a basic level, heading tags are semantic, telling an agent at what level in the hierarchy of information this markup resides. That’s basically it. I’m coming to think that in the age of components we need to treat headings as relative and purely semantic. On one extreme I see developers using just div everywhere. This gets around the problem of default stylings but throws away any attempt at semantic hierarchy. Maybe they sprinkle in some non-standard class name scheme for that.

Ideally H tags would have no styling different from div. Components always start with H1 at their top level and would be rewritten during page composition to the appropriate depth based on the “root” heading level. Styles are applied separately, possibly inverting the “additional semantics” pattern by supplying “additional styling”, even using .h1 class names. This seems like an anti-pattern at first, but then some common trends in React styling started as anti-patterns as well.

Written by newcome

July 3, 2019 at 9:39 am

Posted in Uncategorized

Simple Ubuntu Desktop with Vagrant

I wanted to spin up a Linux development environment to hack on some code that needed epoll. I could have run everything in a Docker container, but I kinda wanted to be in that environment for total hackage.

I thought maybe I’d just do it in a VirtualBox instance. Then I didn’t want to install Ubuntu or anything. Then I realized that Vagrant is supposed to solve this problem. I installed Vagrant and used Ubuntu Trusty: https://app.vagrantup.com/ubuntu

$ brew cask install vagrant
$ vagrant init ubuntu/trusty64
$ vagrant up

Then I realized I wanted a minimal desktop.

So googling.

Yep, XFCE FTW. https://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment
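From memory, and roughly what that answer walks through, the in-guest install looks something like this; the package names are what I recall working on trusty, so treat this as a sketch:

$ sudo apt-get update
$ sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11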

Then install Chrome. It’s easiest to SSH into the instance and paste this stuff in through the native shell.
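Getting a shell in the guest is the standard Vagrant command, run from the folder containing the Vagrantfile:

$ vagrant ssh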

$ wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
$ echo 'deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main' | sudo tee /etc/apt/sources.list.d/google-chrome.list
$ sudo apt-get update
$ sudo apt-get install google-chrome-stable

 

In the VirtualBox window:

$ startxfce4 &

Written by newcome

June 25, 2019 at 3:27 pm

Posted in Uncategorized