Our plan is to continually experiment with new ways to measure technical ability that aren't as adversarial as most standard practices today. We'd love to hear any ideas for more experiments we could run.
Something I've been thinking about for a long time - I'll add this here because you and others on HN are more likely to be able to act on it than I am.
It seems to me that hiring teams of programmers who are known to work well together and have proven track records is a smart idea. By comparison, hiring individual programmers - who may or may not get along with one another, and whose individual hiring decisions may or may not cover the breadth needed, or may create redundancies - is an inefficient and strange practice.
What if you needed an IT department and could hire groups of programmers who have, collectively, proven track records of managing IT well? Need a website done? Yeah, you could hire a designer, a backend server woman, a database guy, a CMS guru - and hope they all get along well and work together.
Or we could imagine hiring a team of people who manage themselves to get the result you're asking for.
> We'd love to hear any ideas for more experiments we could run.
As much as possible, persist the interview results per-candidate and make them reusable so people don't have to re-do the same work over and over and over again. One of the worst parts of interviewing is answering the same basic CS 101 question in every interview because everybody reads the same "how to interview a programmer" post written 15 years ago. Yes, 22 year old interviewer, I know what a compiler is, what an open source license is, what linux is, and how to write a linked list. Also, I've written code for NASA, but you have no basis to evaluate that against your limited knowledge and life experience, so keep asking things you learned two years ago.
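To make the persist-and-reuse idea concrete, here's a minimal sketch in Python of what a portable, per-candidate assessment record might look like - timestamped so staleness can be judged, and linked back to the underlying evidence. All field names here are hypothetical, not any real product's schema:

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class AssessmentRecord:
        """One reusable, per-candidate result for a specific skill area."""
        candidate_id: str
        skill: str          # e.g. "data-structures", "systems-design"
        score: float        # normalized to 0.0-1.0 across interviewers
        assessed_on: str    # ISO date, so consumers can judge staleness
        evidence_url: str   # link to the recorded exercise or work sample

    # A candidate carries records like this from company to company
    # instead of re-answering the same CS 101 questions every time.
    record = AssessmentRecord(
        candidate_id="cand-042",
        skill="data-structures",
        score=0.92,
        assessed_on="2016-03-01",
        evidence_url="https://example.com/sessions/abc123",
    )
    print(json.dumps(asdict(record), indent=2))

The open problems are exactly the ones credentialing runs into below: scores go stale, and a shared record only helps if companies trust whoever produced it.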
I looked into doing something like this a while ago. The long term goal is obviously to become a candidate marketplace where you hold all the quality cards and charge a fee to companies who want a pre-vetted, guaranteed ("our programmers are warrantied against hip dysplasia!") employee pool.
The end result is essentially credentialing/certification where companies only want to hire people "Certified by X." Then, those people can also job hop easier because they are "certified" and don't have to repeat the same interview process everywhere again. Mild speed bumps in credentialing end up being: re-certification (sure I know all this now, but I may not remember any of it 6 months later), bleeding-edge certification (do you even react-goober-swift?), and maintenance certification ("I know this 20 year old technology nobody else knows!").
The reason special people will run away screaming: we don't want to be treated as interchangeable cogs. We are unicorn pony snowflakes. We want our $30k/week contracts and don't need to be evaluated by another programmer's static scripts pseudo-determining what they think is a fair evaluation of our bespoke, artisanal knowledge and abilities.
I think the idea of judging people primarily on strengths is key. But I think you actually understate this when you say in the FAQ that 'Everyone's bad at something' - in the sense that the number of skills, knowledge areas, and abilities that any one person lacks is essentially infinite.
People analytics is a fascinating problem and I think it's high time someone tried to introduce concrete data to it. Several companies are tackling it in their own ways.
At http://InterviewKickstart.com, for example, we're starting to find some interesting correlations between the personal confidence level of an engineer coming in and their employability. Another correlation is between academic achievement and employability. Small data, but interesting still.
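To make "interesting correlations" concrete: with a binary outcome like got-an-offer, this boils down to a point-biserial correlation. A minimal sketch in Python, where the numbers are invented placeholders for illustration, not our data:

    # Invented placeholder data - illustration only, not real results.
    from scipy.stats import pearsonr

    # Self-reported confidence (1-10) at intake, paired with whether
    # the engineer later received an offer (1) or not (0).
    confidence = [3, 5, 6, 4, 8, 7, 9, 2, 6, 8]
    got_offer  = [0, 0, 1, 0, 1, 1, 1, 0, 1, 1]

    # Pearson's r against a binary variable is the point-biserial correlation.
    r, p = pearsonr(confidence, got_offer)
    print("confidence vs. employability: r=%.2f, p=%.3f" % (r, p))

With small data, the p-value matters as much as r - a handful of candidates can produce impressive-looking correlations by chance.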
Confidence helps you build rapport, and building rapport is largely what interviewing is.
Academic achievement, however meaningless that GPA number is, is still a proxy for working hard toward a goal (for the most part). A job search is a goal too.
Do you have any plans to extend beyond technical assessment to help quantify "a good fit with the mission and team"? I'm working with a couple of companies doing interesting things in that behavioural space - and would be fascinated to see outcomes in that field from your data-driven approach. Good luck to you Guillaume and Ammon.
Thanks! We're going to focus on technical assessment for a while because 1) it's already a big challenge in itself, and 2) it feels more quantifiable than culture and mission fit. We're going to let the companies decide on those things for now, but we definitely agree all parts of the process could benefit from more data.
> First, track decisions as quantitatively as possible
First, doesn't every single programmer-focused ATS do this already? Second, as soon as this quantitative information becomes symmetrical (it always does - Google "google interview prep"), how do you prevent coders from gaming ("hacking") the system?
I know you are just getting started, but it would be great to hear whether you plan to extend this to technical management and the different skill sets that would entail.
We'd definitely like to expand to all skill sets. I think startups often underestimate how long it takes to get one thing completely right though, so it wouldn't surprise me if just getting really good at identifying great technical skills takes a long time!