Irrational Exuberance for 06/24/2020
Hi folks,
This is the weekly digest for my blog, Irrational Exuberance. Reach out with thoughts on Twitter at @lethain, or reply to this email.
Posts from this week:
- Trapped in a Values Oasis.
- How to practice backend engineering.
- Stuff I've learned about Diversity, Equity and Inclusion over the past few years.
Trapped in a Values Oasis.
Learning to influence without authority is the keystone leadership skill for transitioning from early to mid career. It becomes an even more important skill later in your career as you need to partner effectively with your peers, executives, and board members.
One of my favorite approaches to influencing without authority is “Model, document and share”, which focuses on enacting changes within your aegis of authority, while making sure it’s easy for others beyond your authority to adopt your changes. Having operated this way in senior management roles at several companies, I’ve come to believe it’s a uniquely impactful way to effect change.
I have also come to appreciate how using it in the wrong circumstances can create a misalignment of values within an organization that causes a great deal of friction. I call those pockets of values misalignment a Values Oasis, and want to talk a bit about why they’re a problem, as well as how you can avoid creating them.
Leaving the oasis
A few years ago, I heard an apocryphal story about Sheryl Sandberg’s departure from Google to Facebook. In the story she apologizes to her team at Google because she’d sheltered them too much from Google’s politics and hadn’t prepared them to succeed once she stopped running interference. The story ends with her entire team struggling and eventually leaving after her departure. I don’t know if the story is true, but it’s an excellent summary of the Values Oasis trap, where a leader uses their personal capital to create a non-conforming environment within a wider organization.
An example that I’ve seen in many companies is the extent to which teams factor community and inclusion work into performance reviews. Some teams rate performance on pure business metrics, others combine these factors additively, others discount business performance if certain community work minimums aren’t met. You can operate an organization with any of these approaches, but it’s risky for your team when you operate an organization that applies these standards inconsistently.
If your organization starts to emphasize community-building work within performance reviews, you may end up rewarding and promoting folks who would not get rewarded or promoted in another peer organization where that work isn’t similarly valued. At some point you will end up in a room where you have to compare the performance across both those organizations, and this is where the Values Oasis creates friction.
In a sufficiently influential role within your organization, you can ensure that your set of values is fairly accounted for in this sort of values melting pot, but what happens once you’re not in that room anymore? If your organization’s values have diverged considerably from those of the broader organization, then it’s likely to be messy.
Perpetuating a Values Oasis is betting your team’s long-term success on your own, and recognizing that ought to shift your ethical calculus. Even when you believe fervently that your values are better for your team, it’s not necessarily an altruistic act of leadership to adopt them if you can’t bring the broader organization along with you.
Ambiguity vs disagreement
When you come across a missing process, this is a great time to lead your organization forward by modeling an effective approach. For example, Julia Evans’ approach of writing brag documents is perfectly shaped to fill a gap that most organizations have. This is the right time to use a technique like “model, document and share.”
Conversely, when you encounter values or processes that you disagree with, modeling a different approach creates the seed of a Values Oasis. For example, if you disagree with how calibration weights community-building efforts within your organization, then modeling a different approach will either create a Values Oasis or demonstrate you failing to commit as a leader.
The rule of thumb here is to lead through ambiguity, and advocate through disagreement.
It’s important to diagnose your situation correctly, because when you get it wrong it’ll still feel like you’re making progress, but that progress is wholly dependent on you and is likely to come at the cost of undermining both you and your team within the broader organization. It can be extraordinarily frustrating to “disagree and commit” to a policy or value that goes against your personal values, but any worthwhile measure of successful leadership needs to consider your team’s success more highly than your own.
If you’re willing to sacrifice being a visible advocate to become an effective advocate, you can make durable, meaningful change over time by advocating through the disagreement and leading through the ambiguity to create an organization you believe in. Until you’re effective with both approaches, each oasis will dry up shortly after your departure.
How to practice backend engineering.
On a recent call, I chatted with someone about backend roles in software engineering, and what folks actually do in those roles. Beyond what these folks do, how would you practice for this kind of role or prepare for its interviews?
Comparing the sorts of work that backend engineers are asked to take on against the work that any engineer might be asked to take on, four categories of tasks stand out to me as being both frequent and practicable:
- Modeling and remodeling data - how do you design an effective data model for your application, and then evolve that data model as requirements shift over time?
- Designing and evolving interfaces - how do other components integrate with your service?
- Integrating with APIs - how do you integrate your application with 3rd party APIs like Twilio, Stripe, and so on?
- Scaling capacity - how do you evolve your architecture to support more load over time?
At the bottom of this post I’ve collected some books and blog posts for each of those that may be helpful if that’s how you learn, but I also wanted to put together a project that folks could use to practice these.
Preamble on learning projects
Before you get started on the project, a few notes on what I’ve generally found makes projects like this effective:
- Not the only way to learn - I want to start by caveating that these sorts of learning projects have always worked well for me, but there are many different ways to learn, and this one is particularly time intensive
- Narrow your focus - don’t try to learn a ton of new things at once. For example, if you’re focused on learning about integrating with APIs, then use a programming language you’re already comfortable with. This is particularly important for backend and infrastructure-style projects because you can spend your entire time trying to get Dockerfiles or Vagrant configurations to work and never get to the actual learning you care about
- Use a source code repository - use a tool like GitHub to store your code so that you have examples to go back to over time. Working code examples that you understand are an amazing debugging and refresher tool
- Use an ephemeral environment - it’s totally fine to work on your laptop, but if you’re able to use a cheap service like DigitalOcean Droplets ($5/month) or Amazon Lightsail ($4/month), you can avoid spending time fixing your local environment and you can just delete everything and start over if something goes particularly wrong. Glitch is also a great option, although you wouldn’t want to use it for the scalability practice
Project definition
For the project itself, I’ll outline a series of steps to take along with the intended learning from each step. This is intentionally a bit vague for you to play around with.
- Scaffolding - getting the pieces ready
- Create a repository on GitHub (or your code hosting of choice) for this new project, and set up an HTTP server using the framework of your choice (if you’re using Python, that might be Flask). Add an endpoint at “/”
- Within your repository, create another directory named “client” which holds an HTTP client to call your service. If you’re using Python, you might use the requests library. It should be able to call your “/” endpoint and print out the response
- Add a database of your choice to the HTTP server, add a table to it, and start writing every request to “/” to that table. You might use SQLite, MySQL, or PostgreSQL (a minimal sketch of this scaffolding follows this group)
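To make the scaffolding concrete, here’s a minimal sketch of what the server might look like, assuming Flask and SQLite; the file name, table name, and port are my own illustrative choices, not part of the exercise:

```python
# server.py - a minimal sketch of the scaffolding steps, assuming Flask
# and SQLite; names and port are illustrative choices
import sqlite3

from flask import Flask

app = Flask(__name__)
DB_PATH = "app.db"


def get_db():
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS requests (id INTEGER PRIMARY KEY, path TEXT)"
    )
    return conn


@app.route("/")
def index():
    # write every request to "/" into the database
    conn = get_db()
    conn.execute("INSERT INTO requests (path) VALUES (?)", ("/",))
    conn.commit()
    conn.close()
    return "Hello!"


if __name__ == "__main__":
    app.run(port=8000)
```

The matching client can start as small as `print(requests.get("http://localhost:8000/").text)` in its own `client/` directory.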
- Evolving data model and interfaces - evolving an existing application
- Update your server to offer two endpoints, one to send a message and another to retrieve recently sent messages. The API should support specifying the number of recent messages to retrieve
- Update your client to use those two endpoints. The client should be able to send and retrieve messages
- Update your server to return messages sent after a point in time. For example, “all messages since 10AM this morning”. This will require adding a new column to your data model, and storing the time created for new messages. You’ll also have to figure out a plan to migrate the existing messages forward. Do you default to the current time for existing messages? Try not to just drop the existing table; migrating the data is an important part of this exercise (see the migration sketch after this group)
- [Bonus task] Add an API that allows you to respond to an existing message, and support returning all replies to a message along with that message. Update your client to render replies differently so you can tell which messages are replies and what message they are a reply to
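As one illustration of the migration step above, here’s a sketch assuming SQLite and a hypothetical `messages` table from the earlier steps; defaulting old rows to the current time is one defensible answer to the question the exercise poses, not the only one:

```python
# migrate.py - a sketch of evolving the data model in place, assuming
# SQLite and a hypothetical "messages" table from the earlier steps
import sqlite3

conn = sqlite3.connect("app.db")

# add the new column; SQLite leaves it NULL on existing rows
conn.execute("ALTER TABLE messages ADD COLUMN created_at TEXT")

# backfill rather than dropping the table: defaulting existing messages
# to the current time is one defensible choice
conn.execute(
    "UPDATE messages SET created_at = datetime('now') WHERE created_at IS NULL"
)
conn.commit()
conn.close()
```

With the column in place, the “messages since a point in time” endpoint reduces to a WHERE clause along the lines of `SELECT body FROM messages WHERE created_at >= ?`.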
- Integrating with external APIs - add another API
- Add a Twilio integration that allows you to text a message and have it added as a message in your service, the same as if you’d used the client to create it (see the webhook sketch after this group)
- [Bonus task] Create a Slack app which allows you to send and retrieve messages to your server using Slack. I wrote up some notes last year on doing something similar
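For the Twilio step, the shape of the integration is an inbound webhook. Here’s a sketch assuming the twilio Python library and a Twilio phone number whose incoming-message webhook is pointed at an `/sms` route on your server; `/sms` and `save_message` are my own names, with `save_message` standing in for your existing write path:

```python
# sms.py - a sketch of the Twilio integration; /sms and save_message
# are my own names, not prescribed by the exercise
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)


def save_message(body):
    # hypothetical helper: reuse whatever write path your server already
    # uses for messages created through the client
    pass


@app.route("/sms", methods=["POST"])
def inbound_sms():
    # Twilio POSTs incoming texts as form parameters, including "Body"
    save_message(request.form["Body"])
    reply = MessagingResponse()
    reply.message("Saved!")
    return str(reply)
```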
- Scaling capacity - how can you evolve your server to support more load?
- Download Locust and set it up to create load against your server. Set up three different load tests: one that does only reads, one that does only writes, and one that does a mix of 50% reads and 50% writes (see the sketch after this group)
- Run the “reads only” load test against your server. How much scale can it tolerate? How can you figure out where you’re spending the most time (hint: try searching for “performance profiling”)? How can you modify your server to support more load (hint: one simple initial strategy might be an in-memory cache, but make sure to think about cache invalidation)?
- Run the “writes only” load test against your server. How much scale can it tolerate? How can you protect the overall stability of the service against too many writes (hint: try searching for “rate limiting”)?
- Run the “writes and reads” load test against your server. How much scale can it tolerate? What other techniques could you deploy to scale it up? Is it slow in the server or in the database? How do you know? Write a list of things you’d use to identify which is the case and how you could address it. (Actually making these sorts of fixes might lead you down the path of spending more money on hosting than you want to, so it’s fine if you don’t implement them!)
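To show what the Locust setup might look like, here’s a sketch of the mixed read/write test, assuming Locust 1.x and the `/messages` endpoints from the earlier steps:

```python
# locustfile.py - a sketch of the 50/50 read/write load test; the
# /messages endpoints are assumed from the earlier steps
from locust import HttpUser, between, task


class MixedUser(HttpUser):
    wait_time = between(0.1, 0.5)

    @task  # equal weights give a roughly 50/50 read/write mix
    def read_messages(self):
        self.client.get("/messages?count=10")

    @task
    def write_message(self):
        self.client.post("/messages", json={"body": "load test message"})
```

Run it with `locust -f locustfile.py --host http://localhost:8000`; the reads-only and writes-only variants just drop one of the two tasks.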
Completing all of these steps ought to give you a fairly representative look into the lifecycle of creating, evolving and maintaining an application from a backend engineering perspective. If this feels too easy, try introducing new elements like a new database, more kinds of load, more complex new requirements for your API to support, and so on.
Resources
Beyond this sort of practice, some resources that might be helpful:
- Designing Data-Intensive Applications - this book is all the rage lately and comes highly recommended by many folks as an introduction to designing and scaling applications.
- Introduction to architecting systems for scale - a blog post I wrote to provide a summary of web scalability techniques.
- Acing Your Architecture Interview - another blog post I wrote discussing strategies to use in architecture interviews.
- Building Scalable Websites - this book is fairly dated at this point, but it was my first entry point to web scalability and I found it very approachable if you’re looking for a quicker read than Designing Data-Intensive Applications.
- Web Scalability for Startup Engineers - I haven’t read it, nor do I know others who have, but from my quick research and the reviews it seems like an updated take on Building Scalable Websites, which might be worth checking out.
Stuff I've learned about Diversity, Equity and Inclusion over the past few years.
When I wrote An Elegant Puzzle, I wanted to document some of the structured ways I’d learned to foster inclusion within organizations, which surfaced in a number of sections, including Opportunity & Membership, Selecting project leads, Inclusion in the first shift, and Work the policy, not the exceptions.
Those pieces continue to reflect my values, but they often operated on an aspirational level without acknowledging the grittier, more ambiguous layers beneath the ideals where you spend most of your time attempting to effect change. In these notes I want to focus on what I’ve seen work over time.
Some caveats before I get too far in: the environments I’ve been thinking about and working within are companies with roughly hundreds to low thousands of engineers that have adopted at least some aspects of the Silicon Valley playbook. I don’t imagine these notes would apply to large companies or to extremely non-SV companies.
## Measure twice, cut once
When folks want to invest into Diversity, Equity and Inclusion, the first reaction is often to push adoption of a handful of common practices like the Rooney Rule, unconscious bias training, and so on. However, it’s important to recognize that many of these approaches require nuanced and skilled application to yield positive results.
For example:
- The Rooney Rule doesn’t increase the chances of a woman or minority candidate being hired if there is only one such candidate; it’s only by having two such candidates in a search that their chances improve. (Here’s another good HBR article on the Rooney Rule.)
- Unconscious bias training may increase reliance on stereotypes rather than reduce it, depending on how it’s run.
It’s easy to conflate having a strong sense of personal justice with understanding how to create an equitable environment for others, but in practice the two aren’t interchangeable. You simply cannot succeed if you privilege conviction over research.
Conversely, while it’s easy to leap past measurement to cutting, you also can’t indefinitely delay cutting. If you get comfortable with the mindset of gathering information until you’re totally confident, you’ll never do anything. Part of progress is accepting you’ll make some mistakes and being prepared to learn from them.
## Metrics-first doesn’t work
Many companies take a metrics-first strategy towards DE&I, and one of the advantages to this approach is that many of the metrics you’ll want to measure are fairly clear: retention, representation at senior levels, compensation, promotion rates, performance scores, and organizational composition. For each of these metrics you’d need to cut the data across a number of intersectional slices to understand the full story.
However, the companies I know that seem most successful at building and maintaining inclusive and diverse teams don’t take a metrics-first approach to diversity, equity or inclusion. I’d state this even more strongly: in my experience, metrics-first approaches to DE&I in practice lead to managerial box-checking and tokenization of folks hired rather than genuine improvement.
Why doesn’t the metrics-first approach work?
Because what you need to create an inclusive, equitable and diverse company is a strategy, not just a goal, and metrics-first approaches defer the strategy across numerous leaders, which typically results in bedlam. Some folks will do great work, some will really struggle, but either way it’ll lack the impact of a cohesive approach. Worse yet, the distributed strategies created by the metrics-first approach are particularly prone to creating Values Oases throughout the organization that are inadvertently harmful to the very folks they aim to support.
If you’re trying to advance DE&I with metrics as your strategy, as opposed to using metrics to measure your strategy, rethink it. Identify your actual strategy. Only then should you use the metrics to evolve your strategy.
As another caveat, emphasizing these sorts of metrics in a small company often leads to obsessing over changes that aren’t statistically significant, a point which Julia Evans has called me on a number of times over the years.
## Hiring role models
A frequent criticism of DE&I efforts is that they often increase organizational diversity but do so by increasing representation in early career roles without increasing representation in senior roles. When I noticed this pattern manifesting in a DE&I effort I was involved in, I decided to refocus my personal efforts on hiring staff-plus engineers as the highest leverage contribution I could make.
This approach was grounded on the belief that a more representative staff-plus engineering cohort would:
- Improve retention and upwards mobility of our existing team, to the extent that every member of the team could identify with a senior role model within our organization.
- Bring missing perspective into our decision-making processes.
- Reduce the likelihood that folks pattern-matched on race or gender as a signal of seniority.
- Bring their referral network to the company to the extent they felt well-supported.
This is, clearly, a critical place to focus, but what I underestimated was the time frame for implementing this approach. I had imagined that BIPOC and women staff-plus engineer candidates approached their job searches somewhat similarly to how I approached my own, which was a flawed assumption.
When I think about a new role, I get to think about the upside and opportunity in that role, whereas this cohort has to spend at least as much time understanding their exposure to risk in the new role and whether they’ll get the support they’ll need to succeed. I thought hiring role models was an initiative that might take six to twelve months to show results; it later became clear to me that this sort of project requires years of building relationships and establishing yourself as a “safe pair of hands” to support folks’ careers.
Establishing yourself as such a safe pair of hands is considerably complex! It requires positive whisper-network feedback, having the right opportunity within your company to offer candidates, and being sufficiently successful within (and dedicated to remaining at) your company that you can sponsor the folks you hire on an ongoing basis.
Doing this well is literally a career’s worth of work. This isn’t just hypothetical: once you start looking, you’ll notice that there are a small number of folks out there in the industry who have genuinely made this a cornerstone of their career’s work, which is a bit mind-blowing to contemplate.
## Predictable is better than ambitious
Last week I got to join a community call hosted by Black Tech for Black Lives, and one of the speakers spoke about the outrage after the police beating of Rodney King in ‘91, and how outrage doesn’t necessarily lead to change (their remarks were not made in public, so I’m omitting the speaker’s identity). On that theme, Dr. Erin L Thomas has a great thread on how you can create change in these moments: “As you plan next steps, please resist the temptation to commit to all the things you could do. Now is the time to FOCUS.”
In these moments of intense energy, it’s easy to think about what the company can do today to improve, but it’s important to throttle change to what the company will commit to sustaining later, even if its attention wanes as other urgent problems emerge over time. Whiplash in policy and investment harms the folks it’s intended to help, so it’s better to do something more modest and truly do it than to overreach, get folks bought into participating, and then squander their efforts.
## You can’t be tired (or entitled)
On the topic of long-term predictability, Marco Rogers had a tweet some time ago which I can’t quite seem to find, but it spoke to the idea that leaders and allies who are already tired of DE&I work are likely to cause more harm than help. His tweet has periodically echoed unsummoned in my head since I read it, because it invited me to be more honest with myself about my motivations for participating in this work.
In particular, it helped me recognize that most of my early work on DE&I was motivated by the desire to hit targets as a high-performer, and then later by a desire to be appreciated as an ally. My efforts were not motivated by a genuine desire to improve things, which greatly limited their impact.
After seeing this behavior in myself, I became better attuned to seeing it in others, and in particular seeing folks with a self-image as an advocate or martyr for the cause but who ensure work only succeeds to the extent it recognizes their efforts. This is a complex phenomenon to think about, because their specific work is typically positive on the margin, but comes with strings attached.
First, the “good deeds” come bundled with an ongoing emotional maintenance cost for the recipient. In some cases, this emotional maintenance cost will quickly surpass the benefit from the initial deed.
Second, the recipients of this kind of work are tokenized by it both in the eyes of the helper (“I got this person the role they deserve”) and in the bystanding eyes surrounding them (“so and so got the role because that person got it for them”). Their success becomes viewed as the accomplishment of their sponsor rather than their own.
Third, this kind of work often saturates the space for others who might do less ego-driven work to improve things, causing them to disengage.
Having seen those challenges over the past few years, I can summarize my current thinking on what it takes to make genuine progress on this sort of work:
- Don’t tokenize others. Be careful you’re really setting the person you’re helping up for success, not a step forward into a target, a tarpit or a glass cliff. If you aren’t careful, it’s easy for an intended act of sponsorship to corrode into an act of tokenization.
- Don’t center yourself. You have to do it in ways that don’t single out instances or individuals, for example the approach described in Work the policy, not the exceptions. It’s an interesting balance, to be vocal about the broader problem, and then invisible on the specific instances and individuals. You still need to do that work, but recognize that if folks notice you doing the particulars of the work that you’re probably undermining your own efforts.
- Don’t be comfortable. You have to be focused on what actually works, not what you feel should work – results over repetition. You can’t keep doing what you’re comfortable with if it doesn’t show results, and you can’t do nothing because you’re uncomfortable with everything that does show results.
There are a bunch of other approaches and learnings to think about, but these are the ones that have been most important to me recently.
## Level playing fields
A lot of my early views on equity and inclusion were rooted in my experience growing up, where I often felt my social and academic struggles were due to what I perceived as disadvantages relative to my peers. Starting from that perspective, it was obvious to me that the defining characteristic of an effective system was one that made me as advantaged as my most advantaged peers. I looked at any system where I wasn’t structurally equivalent to the most advantaged person as an unjust system.
There is a popular concept of a “level playing field”, one where only skill differentiates participants from one another, but as I’ve dug into this concept I’ve found it’s mostly a construct to help folks ignore their own privilege. In reality, there are no level playing fields.
As a personal example, writing has advanced my career tremendously over the past decade. Anyone can write! It just takes a few minutes and a free website. Truly a level playing field! That said, it takes a lot of time to write. If I’d had children earlier and wanted to be an active coparent with my spouse, I would have been less able to write. If I had less economic stability, I would have instead prioritized income over leisurely writing. If I had elderly or sick parents, I would have instead prioritized their care. Anyone can write, yeah, but it’s hard to call the field level unless you’re trying to convince yourself of something.
Similarly, I’ve had a number of folks ask for my story of becoming “successful” in technology, and my advice for how they can recreate it. Some of these folks are supporting their families driving Uber and working to learn software development on the side. These folks are doing something extraordinarily hard, and I stare at those emails and ask myself what advice can I possibly give them? There is some tactical advice I can give which might be useful (resume tips, etc), but the best advice I’ve found so far is that they shouldn’t ask people like me for career advice because anything I tell them is more likely to be harmful than helpful.
With all that in mind, I think we have to be more comfortable with recognizing that when we design interview, calibration or promotion systems to be “level playing fields”, it’s most likely that we’re only leveling them for ourselves by virtue of the areas where our eyes are naturally drawn. The field’s gradients in our peripheral vision will remain askew until we look there just as deliberately as we look at the sections that impact us.
I haven’t come up with a generalized recommendation for this beyond deliberately looking, and getting comfortable engaging with what you find. Part of this is certainly a shift from “fair by means of consistency” to “fair by means of accommodation.” Thinking about the interviewing example, try to shift from evaluating everyone in an identical process (“everyone takes the same five interviews”) to evaluating folks at their best (“you can do a take home if you prefer, or in person interviews if you prefer, or…”), and let candidates make their own informed decision on how they want to be evaluated.
Ok, this is far from everything, but it’s a good summary of some of what’s been top of mind for me over the past couple weeks in this area. I also don’t mean to imply that any of this is innovative or new to other folks, just documenting my learnings along the way.
That's all for now! Hope to hear your thoughts on Twitter at @lethain!