I have, once again, failed an interview (probably). I don’t actually know yet but if I were the interviewer I probably wouldn’t recommend moving the candidate forward. Based on the hyperbolic title, you are probably guessing that I’m pretty salty about failing, but I’m not. I write one of these reflection blog posts each time I fail an interview I care about (fun fact: both were about the same company, enjoy the bonus round at the end). Now I’m writing one after experiencing a few years at a company that didn’t have any coding interviews (fun fact: I almost didn’t pass their interview process either).

Now, I’m just as guilty of being on the coding interview bandwagon as everyone else. I’ve given probably about a hundred of the same formulaic[1] interview that I just took. I’ve passed on most of those candidates, and I would have passed on me. I’ve also spent hours fixing code whose issues stemmed directly from not using the correct data structure or algorithm, hours I may not have had to spend if we had been better at screening candidates during the interview process.

So which one works better? From my small sample size and generally mediocre statistics abilities, I’d say they are about the same. What I do know, however, is that for every 1-hour interview where I evaluated whether someone knew their data structures, I could have just taught them instead. Maybe they did know them already and just forgot because they haven’t used them recently. Math was one of my strongest subjects in school, but if you asked me right now to take a derivative of something, I wouldn’t be able to.

For some reason we think that someone needs to be able to whip out a space- and time-optimal solution to any random problem within 40 minutes. OK, that’s not entirely fair (more hyperbole, I know). I’m well aware that it’s often said that how you solve the problem matters more than whether you solve it, but in practice human biases will work against you if you don’t get close to a working solution.

If you’ve stuck around this far, you are probably curious about the question. Honestly, the question is not interesting at all, and I thought it was perfectly reasonable (it was probably the first of two). The reason I’m writing about all of this is that I woke up to the epiphany that the problem I failed was a problem I solved a year ago.

Q: Assume that you have an infinite stream of data that is coming in from multiple threads in an unordered fashion. Write the stream items to the console in order.
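
For concreteness, here is a minimal sketch of one shape an answer can take, assuming “in order” means each item carries a monotonically increasing sequence number (an assumption you would want to confirm with the interviewer):

using System;
using System.Collections.Generic;

class OrderedPrinter
{
    private readonly object _lock = new object();
    private readonly SortedDictionary<long, string> _pending = new SortedDictionary<long, string>();
    private long _next; // the next sequence number we are allowed to print

    // Called from any producer thread with an item and its sequence number
    public void Publish(long sequence, string item)
    {
        lock (_lock)
        {
            _pending[sequence] = item;

            // Flush every consecutive item we now have, in order
            while (_pending.TryGetValue(_next, out var value))
            {
                Console.WriteLine(value);
                _pending.Remove(_next);
                _next++;
            }
        }
    }
}

A single lock is obviously the naive version, and the buffer grows without bound if one producer stalls, which is presumably where the follow-up discussion goes.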

My actual problem, which took about a month, was that I needed to ingest all of the changes from one or more tables in SQL Server as fast as possible. What I chose to do was hook into the Change Data Capture (CDC) functionality and simultaneously stream the data from CDC while scanning the table for snapshots of the data. Now, while each of those streams is itself in sequential order, we need to reconcile the two depending on which we read first (i.e., if we get updates before the snapshot, we need to store them until we get the full row data and then send them). On top of that, I built an in-memory transaction layer sitting on top of SQLite to optimize memory / disk usage and hit the performance benchmarks I wanted.
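
To give a flavor of just that reconciliation step, here is a drastically simplified sketch; the names are hypothetical, rows and changes are reduced to strings, and the real system also tracked CDC positions and spilled buffered data to SQLite:

using System;
using System.Collections.Generic;

class SnapshotReconciler
{
    private readonly Dictionary<long, Queue<string>> _pendingChanges = new Dictionary<long, Queue<string>>();
    private readonly HashSet<long> _snapshotted = new HashSet<long>();
    private readonly Action<long, string> _emit; // downstream consumer

    public SnapshotReconciler(Action<long, string> emit) => _emit = emit;

    // Called by the CDC reader for each change event
    public void OnChange(long key, string change)
    {
        if (_snapshotted.Contains(key))
        {
            // We already have full row data for this key; changes flow straight through
            _emit(key, change);
        }
        else
        {
            // Update arrived before the snapshot: hold it until the full row shows up
            if (!_pendingChanges.TryGetValue(key, out var queue))
                _pendingChanges[key] = queue = new Queue<string>();
            queue.Enqueue(change);
        }
    }

    // Called by the table scanner when it produces the full row data
    public void OnSnapshot(long key, string fullRow)
    {
        _snapshotted.Add(key);
        _emit(key, fullRow);

        // Replay any changes that beat the snapshot, in arrival order
        if (_pendingChanges.TryGetValue(key, out var queue))
        {
            while (queue.Count > 0)
                _emit(key, queue.Dequeue());
            _pendingChanges.Remove(key);
        }
    }
}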

This literally checked off every box they were looking for (even the ones I missed) in the interview question. Thread safety: check. Lock primitives: check. Optimal Concurrency: check. Producer-consumer: check. Fault tolerant: check. Hashing: check. The code I wrote is ok at best, honestly. The greatest lie we have ever told ourselves is that we want greenfield projects because we won’t have to deal with legacy code, but legacy code is just greenfield code that is written under the duress of trying to solve the problem at the same time.

So now, just as when I’m in an interview, I’m at a loss for the answer to the question at hand: what should we do instead? I honestly don’t know, but I do feel like we are stuck in a cargo-cult mentality where we are just doing things because that’s what the big companies do, and if it works for them, it must be what we need to do. I understand the problems associated with hiring the wrong people as well, which may well be the actual reason we are, rightfully, stuck with such fearful, timid hiring practices.

For now, I’ll just keep practicing for interviews until I successfully trick someone into thinking that I know how to code and then secretly become one of the best employees they have ever had.


Bonus round! I guess enough time has gone by (11 years!) that I can say that both of my failed interview posts were from when I applied to StackOverflow. The first time, I bombed out just as hard as in this interview. The second time, however, I went through 2-3 coding interviews (which I believe I passed), 1 interview with the product manager, and finally an interview with David Fullerton and Joel Spolsky. I completely bombed the interview with David Fullerton and I’m pretty sure that’s what killed my application (but I can’t say one way or the other).

That interview, however, is the only interview that actually bothers me to this day. The question, which I’ve heard is a Facebook favorite, was “convert a decimal number to base negative 2”. This question, which is doable in 40 minutes and more about the problem-solving process than anything, is the dumbest fucking question I have ever had, and I don’t care what anybody else thinks. I’ve never had problems with any other question because, as I’ve shown in this post, most of them are things you’ll actually run into at some point. And yes, converting numbers to other representations (roman numerals, etc.) is a valid thing, but fuck that question and the way it just waits for you to eventually arrive at the little trick that makes it work.
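
For the record, the little trick is repeated division by -2, borrowing from the quotient whenever the remainder comes out negative so that every digit stays 0 or 1. A minimal sketch:

using System.Text;

static string ToNegabinary(int n)
{
    if (n == 0) return "0";

    var digits = new StringBuilder();
    while (n != 0)
    {
        int remainder = n % -2; // in C#, this can be -1, 0, or 1
        n /= -2;
        if (remainder < 0)
        {
            // The trick: normalize a negative remainder by borrowing from the quotient
            remainder += 2;
            n += 1;
        }
        digits.Insert(0, remainder);
    }
    return digits.ToString();
}

For example, ToNegabinary(6) yields "11010", since 16 - 8 - 2 = 6.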

Because of my schedule, the interviews were stacked back-to-back when they normally would not have been; otherwise, I wouldn’t have ended up having the interview with Joel at all. That one was especially awkward: he was a little late, so I sat there waiting on a spinning Google Hangouts screen, my head still trying to process the question, knowing that I had completely bombed the previous interview and that this one didn’t matter, and not actually knowing whether it would even happen.

No hostility to anyone involved, but to this day I hate that question, and it’s the only question I’ve received that I would actually categorize as “gimmicky.” RIP StackOverflow Careers.


  [1] 5 minutes on introductions, 40 minutes on the question, 10-15 minutes for questions.

My current project at Homesnap involves breaking a monolith application into multiple microservices and part of that is moving a large amount of data to that new microservice by way of feeding the data through our new API.

While I’m able to get reasonable throughput by firehosing the API with data from my machine, it simply doesn’t have enough throughput to move the necessary volume of data in any reasonable amount of time, so I decided to queue the data using Amazon’s Simple Queue Service (SQS), with a Lambda function triggered by data being written to the queue. In theory, this lets me scale out the data ingestion using Amazon’s capacity rather than finding more and more machines to run my import utility on.

What I found, however, was that Lambda, with 80 concurrent executions sending 10 requests per batch, could barely outperform my single machine. My setup was pretty basic: a stateless, HTTP-based API behind an AWS Application Load Balancer (ALB), so I would have expected Lambda to scale roughly linearly until the database’s resources were exhausted. I began tinkering until I got the performance I was expecting.

1. Sticky Sessions

While I wouldn’t expect sticky sessions to be necessary (since my application is completely stateless), I wasn’t able to get reasonable performance from either my machine or Lambda through the ALB without them enabled. Enabling them improved performance significantly from my machine, but Lambda still suffered, so I kept digging.

2. Get Node’s http Module to Use Sticky Sessions

My Lambda function was just a simple Node function that used the built-in http module to send requests to the API. However, unlike client-side JavaScript using the fetch API, the http module doesn’t automatically store or send cookies. Thus, while sticky sessions were enabled, the Lambda function was never benefiting from them! Fixing this is fairly trivial once you know what needs to be done.

When Lambda loads your Node function, it executes your script once but calls your exported handler function repeatedly until the process shuts down. We'll exploit this fact in the code snippet below.
const http = require('http');

// Initialize our cookie jar to an empty array. Because the process is reused
// across invocations, this survives between handler calls.
let cookie = [];

function sendRequest(data) {
    return new Promise((resolve, reject) => {
        const request = http.request({
            host: 'example.com',
            path: '/api',
            method: 'POST',
            headers: {
                // Pass our stored cookies to the request, joined per the Cookie header format
                "Cookie": cookie.join('; ')
            }
        }, (response) => {
            // When we receive a response, store the cookies returned from the server
            // into our cookie variable. Note that a cookie is a semicolon-delimited
            // string of attributes, but the only part we want to send back up to the
            // server is the first one. Also note we assign to the outer variable here;
            // declaring a new `let cookie` would shadow it and the sticky session
            // cookie would never be saved.
            cookie = (response.headers["set-cookie"] || []).map(v => v.split(';')[0]);

            // Drain the response and settle the promise so the handler's await completes
            response.on('data', () => {});
            response.on('end', resolve);
            response.on('error', reject);
        });

        request.on('error', reject);
        request.end(data);
    });
}

exports.handler = async (event) => {
    await sendRequest('{ "hello": "world" }');
};

The result? My function went from ~2,000 invocations per minute to ~9,500 invocations per minute, and the maximum duration of my function dropped from ~22-28 seconds to ~6 seconds. Additionally, I was able to reduce the number of Docker images running in the cluster from 20 to 6 while sustaining the same throughput. All in all, everything is running faster at a lower cost, which makes me happy.

Hopefully this can help you if you are in a similar situation.

Rob Conery has released a new book called The Imposter’s Handbook for those in the software industry who don’t have a strong background in computer science fundamentals. I haven’t read the book so I can’t comment on what it covers, but the concept has me reflecting on my own similar experience.

While I did attend some university, I only finished about half of my computer science degree before dropping out. At the time, I didn’t think I had missed out on anything because I had already taught myself everything I had encountered in school up to that point; I was always a semester ahead of my coursework.

I realize now that I was pretty close to the promised land of computer science, because I had just hit Big-O notation, though at the time I thought it was completely pointless. For where I was then, I was somewhat correct, but looking back now I realize how wrong I was.

This idea still resonates with many self-taught developers, though. The easiest place to see it is in the criticism of technical interviews and their focus on rote data structure / algorithm questions. Often the counter-argument is “I don’t need to know this because I can just google it”, which is a fairly valid response.

So why, exactly, should you care about computer science if you are already a good developer? Well, to an extent, you don’t. You can be a successful developer, at least for a while.

I was successful for years before I started diving deep into computer science on my own. I wasn’t some prodigy; I was just riding the effective application of computer science through my use of a DBMS. In fact, a large number of developers can be, and are, successful because those fundamentals are baked into the frameworks they use.

At some point, though, you will start hitting scaling problems as you continue to be successful. Oftentimes, they will arrive at a smaller scale than you would expect. Sometimes, it will happen because you finally land that big client.

The classic example: you have a component with a double loop that works great for all of your existing clients, each of whom has a hundred items of something. Then you land the client who has a thousand, perhaps even ten thousand, items, and everything suddenly starts falling over.
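
A hypothetical sketch of that failure mode, finding items common to two lists: the nested scan is O(n·m) and invisible at a hundred items, while the HashSet version is O(n + m) and survives ten thousand:

using System.Collections.Generic;
using System.Linq;

static class Matching
{
    // Fine at 100 items, falls over at 10,000: every item scans the whole other list
    static List<int> IntersectNaive(List<int> a, List<int> b)
    {
        var result = new List<int>();
        foreach (var x in a)
            if (b.Contains(x)) // an O(m) scan hiding inside the loop
                result.Add(x);
        return result;
    }

    // Same result, but one pass over each list
    static List<int> IntersectFast(List<int> a, List<int> b)
    {
        var seen = new HashSet<int>(b); // O(1) membership checks
        return a.Where(seen.Contains).ToList();
    }
}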

This is why many companies focus on ensuring that developers have a good grasp of data structures, algorithms, and their space / time complexity analysis during the interview process; they have hit the point where they can’t scale further without doing things right. Furthermore, they can’t afford to have people writing code that, when released, will immediately fall over under load.

So, if you’ve made it this far, I now make an appeal to you, reader. If you don’t have a strong background in computer science, start today. You don’t need it, but it can only make things better. Now is the greatest time to do such a thing because there’s never been better access to that information. Whether you read a book, take an online course, or read blogs, there’s a multitude of information out there for a reasonable price.

Recently I was working on reducing some redundancy in an ASP.NET MVC application and ran into an error when trying to conditionally render a section. ASP.NET was not happy and gave me the following error:

The following sections have been defined but have not yet been rendered for the layout page “~/Views/Shared/_Layout.cshtml”: “ProductionOnlyScripts”.

On this layout page we have the following code.

@if (ApplicationConfiguration.IsProduction)
{
    @RenderSection("ProductionOnlyScripts", false)
}

I never expected that defining a section would mean MVC requires me to render it. I can kind of understand why, but I disagree with this design decision. Let’s run down the options we have with sections in general.

  1. Calling @RenderSection(name) when the section has not been defined will throw an error. This is the right thing to do.
  2. Calling @RenderSection(name, required: false) when the section has not been defined will not throw an error. This is the right thing to do.
  3. Not calling @RenderSection(name) or @RenderSection(name, required: false) will throw an error if that section has been defined (as we observed above).

So, the workaround is simple: we render the section to nothing.

@if (ApplicationConfiguration.IsProduction)
{
    @RenderSection("ProductionOnlyScripts", false)
}
else
{
    RenderSection("ProductionOnlyScripts", false)?.WriteTo(TextWriter.Null);
}

In the end, ASP.NET is tracking the section fragment and ensuring that we write out its contents. What it doesn’t know is that we are effectively writing it to /dev/null, so the content will never make its way to the browser.

This can also be extracted into an extension method if you find yourself doing this often, as sketched below.
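
For example, something like the following, assuming MVC 5 where RenderSection lives on WebPageBase (the name IgnoreSection is my own invention, not a framework API):

using System.IO;
using System.Web.WebPages;

public static class SectionExtensions
{
    // Marks the section as rendered without emitting anything to the response
    public static void IgnoreSection(this WebPageBase page, string sectionName)
    {
        page.RenderSection(sectionName, required: false)?.WriteTo(TextWriter.Null);
    }
}

The else branch in the layout then becomes a one-liner: this.IgnoreSection("ProductionOnlyScripts");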

The only real mistake is the one from which we learn nothing.

John Powell

Let’s make one thing clear: being a candidate in an interview is hard. Over your career you will likely participate in many interviews as a candidate, and I can assure you that you likely won’t pass every single one. You aren’t alone here; I myself am no stranger to failed interviews. You can, however, make sure that you get something out of every single interview: a new job, exposure to a new problem, a new way to solve a problem, or a better understanding of a problem.

As I have been interviewing candidates recently for a software engineer position at DevResults, I’ve been thinking about what I would do if I had been the candidate in these interviews, and I’ve compiled my current thoughts into a few general tips. Some of these are specific to technical interviews, but the majority apply to interviews in general.

Failing a problem is an opportunity to learn

In one interview, I was asked to return all permutations of a string. At the time, I had never had to do anything like it, and needless to say, I didn’t do well. After the interview I took the time to really understand the problem and strategies for generating permutations in general, mainly to ensure I would never fail that question again.

Fast-forward to last year: I found myself in a scenario where I did in fact need to generate all possible permutations of a set of data. It made me really appreciate having been exposed to that problem in the interview, and it was pretty trivial to apply what I had learned to solve it.
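
If you’ve never written one, a minimal recursive sketch looks something like this: fix each character in the first position and permute the rest (note it will emit duplicates if the input has repeated characters):

using System.Collections.Generic;

static IEnumerable<string> Permutations(string s)
{
    // Base case: the empty string and single characters permute to themselves
    if (s.Length <= 1)
    {
        yield return s;
        yield break;
    }

    // Fix each character in the first position, then permute the remainder
    for (int i = 0; i < s.Length; i++)
        foreach (var rest in Permutations(s.Remove(i, 1)))
            yield return s[i] + rest;
}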

Most of the time, however, you’ll probably see something you’ve encountered before, or some variation of a known problem. In scenarios like these there are often multiple ways to approach the problem, and if you didn’t quite get it, the interviewer may explain the solution to you, or you can ask what it is. If you are able to get the solution, definitely take the time later to make sure you understand it and why it’s the right solution.

If you actually got the solution, but in a non-optimal way (perhaps your solution had poor runtime performance or poor memory usage), be sure to note the problems your algorithm had and research solutions that address those issues.

Ask for feedback

Don’t hesitate to ask for feedback or suggestions on how you did in an interview, but keep your questions focused on your performance. A lot of the time you won’t get a response for various reasons, such as company policy preventing the interviewer from giving any feedback. Any feedback you are able to get, however, can be very important.

Every interview I conduct, I block off an hour and a half, which is broken into three parts:

  • The first 10 - 15 minutes I go over the job description and get to know a little more about the candidate
  • The next 45 minutes is dedicated to the candidate answering technical questions
  • The remaining 30 minutes is open question time for the candidate

The vast majority of the time, candidates spend about 10-15 minutes asking the usual questions: what kind of source control we use, how we do deployments, etc. The other day, however, one candidate took full advantage of this time, which I always explicitly say is open Q&A. The candidate asked for feedback on their resume, how well they were communicating while they were coding, how well I could understand them (they were a non-native English speaker), and many other questions.

The insight was extremely valuable for them personally and likely not something any company policy would bar. Before and after each interview, think critically about what things you struggle with and try to get feedback from the interviewer on how they think you did in those regards. This will help you do better in all subsequent interviews.

The only way to get better at interviewing is to do interviews

This seems like common-sense advice, but it really is true. If you are having a hard time with interviews, you just need to do more of them. Generally, people only apply to companies or jobs they are really interested in, holding off on other positions they would likely be happy with but that aren’t their first choice. That’s fine, but it can be really demotivating when you don’t make the cut for the positions you are most interested in.

When I’m job searching, I tend to take a balanced approach and apply to both types of positions, which lets me practice interviewing and reduces the risk of failing an interview somewhere I really want to work just because my nerves got to me. It would be unethical to waste someone’s time by applying to positions you would never actually want, though, so only apply to jobs you can see yourself being happy in.

The last time I went through the hiring process, things actually got a little complicated at the end: I had to choose between a job I thought I would really like and a position I had originally applied to thinking I could merely be happy there, but that I knew I really wanted once I had met the people involved and learned more about the mission / vision.