Jun 24 2016

That Time I Joined Twilio

Twilio is one of the few companies aligned with my core values as a technologist: the importance of the developer and developer experience, the value of design and design thinking, and the significance of powerful, accessible "plumbing". There is no other publicly traded company like it.

With yesterday's IPO, it's hard not to think about Twilio and my time there. I think the most interesting part of my Twilio story is how I ended up working there. Among all the early employees, I had a pretty unique experience and it had a lot to do with how I joined. It was sort of a fluke.

I've talked a bit about my time on the Twilio platform team. That experience in particular sort of changed my life. It laid the blueprint for everything I've done since. However, the first half of my time at Twilio was building a product that never shipped called Twilio Realtime. Not many people know this story.

I joined Twilio in April 2010 and stayed for two years. Even today that's the longest I've worked at any one place other than for myself. I learned so much at Twilio. The company, people, and values there had a huge impact on me. It's also where I validated a hypothesis: I cannot work for any company as a full-time employee.

Twilio was my first full-time job in the industry. That's not to say I was new to the industry; I'd just always been an independent contractor before then. After working independently for 10 years and spending most of my time writing open source projects, it was a big deal for me to take a job like that.

Just before Twilio, I had been working with some friends at NASA Ames. It was originally to prototype a web app to enable participation in the TESS mission. Actually this was pretty cool. It was going to allow people to submit code that would run on the NASA satellite to do arbitrary analysis on extremely high resolution photographs of space, data that was too big to send back to Earth to process there.

Our team of young space geeks eventually evolved into a larger project called NASA Nebula. Among other outcomes, the software developed there became Nova, the heart of OpenStack. However, this hadn't happened yet. I worked on what led up to this for a year, but the politics and particular technical focus at that point had started losing my interest, so we parted ways.

Despite that, I was feeling pretty great. I had worked at NASA! How cool is that. Plus, SuperHappyDevHouse, a regular free event I started with friends a few years before, was peaking with around 300 attendees each event. And in the fall of 2009, we opened the doors to Hacker Dojo in Mountain View, today the largest hackerspace in the country. I was also actively evangelizing an idea called webhooks, and it was working. Because of that, people were inviting me to speak at conferences, and I even gave a Tech Talk at Google.

To a 24-year-old who grew up poor, didn't go to college, and never had a full-time job, all this felt pretty good. That said, please excuse this tweet of mine from January 2010:

Beginning February I will be available for hire. Keep in mind, I'm only considering my dream job. Think you have it?

This is how I ended up being employee 8 (9? Still unclear) at Twilio. Jeff Lawson replied to this tweet with something like, "We should talk."

Why Twilio? There were a number of reasons.

The first that stood out to me was that the core mechanism of Twilio was based on webhooks. At that time, anybody who was implementing webhooks was immediately endearing to me. However, they went further and were using them in a way that showed they really got it.

Also, Twilio wasn't unknown to me. I was already using it for various projects and demos. I thought the platform was amazing. I mean just read how it was pitched to Fred Wilson.

I'd been fascinated with telephony long before Twilio was conceived. As a kid I loved hooking up and splitting phone lines, and generally playing with the phone. I loved the stories of Woz and Jobs messing with the phone system. It was appealing to me for the same reason it was to them. Before the Internet it was basically the only modern worldwide system, and to tap into it felt immensely powerful.

Most telephony software was built on Asterisk. I never worked with Asterisk, but around 2003 my dad started an ill-fated company called Fonestream built on it. The product was based on the premise of combining audio streaming on the web with telephony. It wasn't able to find a sustainable market; perhaps it was ahead of its time, or maybe it was just too niche. I remember later bragging to him that I could rebuild Fonestream in an evening using Twilio.

Fast forward to 2009, just before discovering Twilio, a friend and I were inspired by some talks explaining how much of the third world was connected via 2G phones and SMS. In some cases, a village would share a single 2G phone that let people talk to distant family or make payments via SMS. These talks made the case for building SMS apps to enable people to coordinate and collaborate in ways that would let them solve their own real-world problems.

As an example I recall, in some places people would walk long distances in the heat to get to a market. Sometimes they'd also have to take a train as part of that trip. If the train wasn't running for whatever reason, they'd walk all the way to the train only to have to turn back. Making it easy to know the train wasn't running before they left would save them that pointless trip.

After seeing these, my friend Adam Smith and I talked about making an SMS wiki. Wikis were hotter back then and really showed how a simple tool can be used for so many collaborative purposes. We didn't have Twilio yet, but we did find a way to tap into Google Voice's SMS capabilities to essentially get the same SMS API for free, though against terms of service. It worked so well, we generalized it into a framework for building SMS apps. Neither was used again, but both are still on my Github: ShortWiki and ShortNet.

Shortly after, we discovered Twilio.

The coolest use of Twilio I'd seen was made by Adam and other friends completely for fun. It was a game hacked together a few hours before my 24th birthday party, an epic joint party at Hacker Dojo just after it opened. The game was projected onto a huge wall. It was essentially Missile Command, except you're saving me from the missiles. There was a grid with numbers in each cell and a number to call. Anybody at the party could try to stop the missiles from destroying me by calling in and entering a number from the grid.

Jeff Game

All of this was in the back of my mind when I met with Jeff Lawson at Red Rock Coffee. I liked that he was a technical CEO. We chatted about webhooks, NASA, and PHP. Then Twisted Python came up. He said, "You have to talk to Evan, our CTO." Apparently Evan was all about Twisted. Half of Twilio was written in it.

Twisted was the Node.js of the 2000s, except nowhere near as popular. But it was the first to really popularize the single-threaded evented programming model for high performance networking applications.
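To give a flavor of that model (a generic textbook-style example, not anything from Twilio's codebase), here's a minimal Twisted echo server. One reactor loop runs in one thread and fires callbacks as connections and data arrive:

    # Minimal Twisted echo server: one thread, one event loop (the reactor),
    # with callbacks fired as network events come in. Illustration only.
    from twisted.internet import protocol, reactor

    class Echo(protocol.Protocol):
        def dataReceived(self, data):
            # Called by the reactor whenever bytes arrive on this connection.
            self.transport.write(data)

    class EchoFactory(protocol.Factory):
        def buildProtocol(self, addr):
            # Called by the reactor for each new incoming connection.
            return Echo()

    reactor.listenTCP(8000, EchoFactory())
    reactor.run()  # the single-threaded loop services every connection

That one loop can juggle thousands of open connections without a thread per client, which is exactly what made it, and later Node.js, attractive for long-lived connections.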

My next visit was to Twilio HQ, then at San Francisco's Pier 38 just before they moved out. Jeff Lawson took me to brunch. We talked a lot about the power of APIs and how everything should have a web API. We even jammed around the idea of Twilio someday providing an API to manage and move around money that people could build banks on top of.

Oh how naive we were, as I'm sure the founders of Simple, Dwolla, and others would agree. It was also certainly outside the scope of communications, but there was definitely a shared sense that what Twilio was doing for telecom could, should, and inevitably would be done for many other industries and systems.

We went back to Pier 38 and into a conference room where I met their CTO, Evan Cooke. Here the three of us talked about working together.

Remember, I said I was looking for my dream job and I meant it. Can you guess what that entails? If you know me it wouldn't surprise you: A place where I can work on whatever I want. Sure, who wouldn't want that? But I was ready to walk away if they said no.

I wanted a place where I could do my life's work. Not a place that would give me my life's work. A place I could do my life's work. A place I could make a home out of, stop contracting, and focus on all the projects I consider to be important. In theory, Twilio and I seemed aligned enough that it could work.

Their response was fairly predictable. They said, "Well what do you want to work on first?"

At that time, regardless of my job situation, there was a project I was planning to build as open source and ideally run as a free service. It was a service that would handle real-time communication to the browser via a REST API. Today, especially with Node.js and WebSockets, this isn't anything special. Back then, though, neither of those existed.

The long-lived connections necessary for this were something a lot of web developers were actually afraid of. "It doesn't scale!" "They're too expensive!" A lot of very talented, prominent developers had this opinion and said it wouldn't work. They were sort of right, but also not entirely.

Back in 2004 when Gmail was released, it did something almost nobody was doing then. It used a browser feature called XMLHttpRequest, or XHR, which had been functional in Mozilla since 2002. By 2004 it was available in the top major browsers. It allowed pages to make requests and load content after the page was loaded, updating without loading a new page. The technique was dubbed Ajax and became a trendy way to make faster, less jarring dynamic web experiences.

One night not long after Gmail was released, 7 years before WebSockets, Adam Smith and I were working on a web project and got distracted by an idea. Inspired by Gmail, we wondered if we could use Ajax/XHR to make a real-time game in the browser. Most web apps that wanted to push updates to a page would have the browser poll the server using XHR. That wasn't real-time. That wouldn't support, say, a multiplayer action game.

Armed with Apache, PHP, Firefox, and no serious knowledge of concurrent programming, over the next two days straight we did it. We ended up implementing long-polling, which wasn't yet common enough to have a consistent name. The XHR request would stay open until it got a message, then reconnect immediately and stay open until the next. Apache and PHP made it really difficult to share data between requests, but we found PHP's POSIX IPC functions, which let us create a file both request threads could see and effectively send messages on. In retrospect, a Unix domain socket would have been a better idea, though it's possible PHP didn't have an API for them yet.
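For the curious, here's roughly what the server half of that pattern looks like. This is only a sketch, in Python with Twisted rather than the Apache/PHP we actually used, and the /messages endpoint is made up for illustration: a GET is held open until a POST delivers a message, then every waiting client gets the message and immediately reconnects.

    # Long-polling sketch (illustration only, not the original PHP hack).
    # GET /messages parks the request until something arrives;
    # POST /messages releases every parked request with the posted body.
    from twisted.internet import reactor
    from twisted.web import resource, server

    class Messages(resource.Resource):
        isLeaf = True

        def __init__(self):
            resource.Resource.__init__(self)
            self.waiting = []  # requests currently held open

        def render_GET(self, request):
            self.waiting.append(request)
            return server.NOT_DONE_YET  # keep the connection open

        def render_POST(self, request):
            msg = request.content.read()
            for held in self.waiting:
                held.write(msg)
                held.finish()  # the client gets the message and reconnects
            self.waiting = []
            return b"ok"

    reactor.listenTCP(8080, server.Site(Messages()))
    reactor.run()

A real version would also have to notice clients that disconnect while parked (Twisted's request.notifyFinish() helps there) and enforce timeouts, which gives a taste of the bookkeeping that made this stuff hard to do reliably.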

Nearly a year after making it, we showed off the proof-of-concept game, which we called AjaxWar, at SuperHappyDevHouse to much acclaim. We won best talk. A poorly lit video of the talk is still on YouTube. In the audience was Alex Russell, who later coined Comet to describe techniques like this for achieving real-time streaming in the browser.

Even with Comet popular enough to be given a name, it was not done that often. It was extremely hard to implement reliably in a consistent way across browsers and versions of browsers, let alone at scale. There were a handful of intricate tricks, all of which you'd implement then detect which one to use based on the browser. But it wasn't just client-side. Each trick required a different backend implementation, and none of them could be done with the common web stacks of the time.

I wanted to solve this problem with a simple web API. There was basically one open source project that people could use for this sort of thing, but it was built on technology foreign to most web developers and would be another fragile component to run in the stack. I felt this was unnecessary and inaccessible to most developers.

I almost considered doing it as a startup myself, but there I was with Evan and Jeff and they asked what I wanted to do. I told them real-time browser communication as a service.

Evan's eyes lit up. He showed me a demo he built using Twisted. It was a web page that visualized every event going through Twilio in real-time. It would seem he was excited by the idea. Jeff also seemed excited to expand the product line to a new developer tool. It wasn't telecom, but it was communications.

I was hired and I had my first project. It had almost nothing to do with the existing Twilio product.

Twilio Realtime was on its way, but I was doing it all alone, despite a growing engineering team around me. They were tackling problems with scaling, billing, important features, responding to incidents, and everything related to building and running Twilio. I wasn't. Sometimes I'd get involved or provide feedback if asked, but I just wasn't responsible for those things.

I've heard the most common story told about me at Twilio today is this: Somebody interviewing for a job asks, "What's it like working with Jeff Lindsay?" The interviewer, an engineer I know with a great sense of humor, replies, "It's sort of like not working with Jeff Lindsay."

We all worked at the same company, but I didn't actually work with anybody for most of that first year. This wasn't intentional, but that's what happens when you join a company to work on something other than the company's product. This eventually changed, but I certainly gained a reputation as a loner.

Despite that, Twilio Realtime was coming along. We had a private preview and people loved it. However, other startups were popping up providing a similar product. I recall Pusher and later PubNub. I got a lot of feedback that Twilio Realtime was easier to use. That was encouraging. Plus I figured, "We're Twilio, we can blow those other guys out of the water."

Unfortunately, it never felt like Twilio Realtime was getting close to launch. Evan had imposed some intense scaling requirements for launch, on the order of millions of subscribers. This whole period was a blur. It was frustrating because a new startup would just ship and deal with growth as it came. At this point, Twilio couldn't afford to do that.

Fast forward a few months, Twilio had gotten even bigger. There were now several product people, but Twilio Realtime didn't have any support from product. It basically slipped through the cracks since it started before a product team existed, and product was now necessary to get anything shipped. After identifying this, Twilio Realtime was assigned a product manager.

We tried to make it work, and it would have been a great product to use, but it seemed ultimately there was not a strong enough business case to launch Twilio Realtime. In retrospect it's obvious from a technology standpoint that the problem it solved was on its way to being dissolved. Instead the technology and team were rolled into Twilio Client, a toolkit for web and mobile to make softphones and other communications applications.

It was good to work with a whole team. It was also a nice break to be part of somebody else's vision. I found interesting problems to get involved with like presence, API design, security tokens, and more messaging stuff. I was also organizing Twilio Tech Talks and bringing in people to talk about useful technology and ideas.

Twilio Client shipped around my one year mark. That's when I joined the fledgling infrastructure team. After another year, I realized as great as Twilio was, as important as the product was, working there was not maximizing my output. It was not the place I could do my work. No company would have been, and I've since gone back to collaborations and finding ways to fund my independent work.

Even if Twilio wasn't the place I could do my life's work, and as obvious as it is now that it wouldn't have worked, I'm grateful that Jeff and Evan took that chance with me. Not just because of this IPO, which might actually help fund more of my work, but because of what I learned, the people I met, and the ideas that were explored there. It was fun to be even a small part of such an incredibly special company.

Congratulations to everybody at Twilio.


Apr 25 2016

Generative Technology

I've felt for a long time that certain technologies were more appealing to me than others. I'm not talking about which industries or fields a technology is applied in, but something deeper that impacts all technology. It's the reason why the computer felt so special to many of us. Perhaps why a lot of us got into software. For years I've been searching for a way to describe it, and now I can: generativity.

In 2008, Jonathan Zittrain wrote The Future of the Internet and How To Stop It. The book presents the idea of two types of technologies: sterile and generative. It ultimately focuses on policymaking concerns around generativity, in particular the security implications. Cory Doctorow has a good summary review if you'd like to learn more about the actual topic of the book. However, here I'd like to expand on generativity from a broader, technologist point of view.

Zittrain describes sterile technologies as those with a fixed purpose or use. They're made for one thing and don't often surprise you. Another way to think of them is as appliances. Home appliances exemplify sterile technology. The toaster is pretty much only good as a toaster. The smoke alarm is only good as a smoke alarm.

Generative technologies are those with more potential uses than the creator could even imagine. They're general, repurposable, and often surprising in how they're used. They encourage innovation and tend to result in new technology on top of them. One of the most generative technologies we have is computing. The Internet and the web are themselves also generative technologies. At the content layer on top of the web, Wikipedia is a generative technology.

Toaster and Commodore 64

Most technology falls somewhere between sterile and generative. Or perhaps it's a little of both. Zittrain describes certain devices as "tethered." That is, generative, but not purely generative. The iPhone and the App Store are common examples of tethering. They allow anybody to create new uses for the device by making apps, but Apple is the gatekeeper. Truly generative technologies and platforms don't have a single gatekeeper for their ecosystem. Again, our prime examples: the Internet, the web, Wikipedia.

Zittrain tries to remain neutral on whether sterile or generative is better. He argues that your car should likely be a sterile technology. Generative systems can grow messy, and malicious third parties can often abuse them. Truly generative systems can sometimes be compared to the Wild West. Perhaps in fear of that chaos and of what we don't understand, we let companies sterilize and exert control over these technologies.

The book ends up focusing on the debate around this trade-off. Yet it's the idea of generativity that captures my interest most. I'm not against sterile technologies, but I do have a bias towards generativity. I would love to see more generative technology than sterile, but we live in a world that discourages it.

Blame it on consumerism

Business loves sterile technology. It's more predictable and easier to control. Sterile is also easier to market. A product needs to solve a specific problem for people to see a need for it. This is why products tend to have a fixed purpose in marketing if not by design.

A product with several uses is often harder to market until the uses are well-known. Think of baking soda and personal computers.

Twilio is another example of a generative product. We first marketed to developers because they could see all the ways to use it. Later, marketing to large enterprises required a shift in strategy. We had to focus on narrow, problem-specific solutions on top of it. This is often a necessary strategy for API and platform companies.

Besides marketability, generativity often comes with other business challenges. Having untethered uses is a support nightmare. Generative technology can also be confusing and harder to adopt. And again, malicious third parties can hurt users and product reputation. These all have costs that are easier to manage by limiting generativity.

Another way to say it: sterile technology is more consumer-friendly. By keeping it simple and focused, it's easier to use, easier to market, and easier to run a business around. These all sound great, but from a technology standpoint the trade-off is the possibility space it allows us to explore. A toaster lets us make toast. A computer lets us do almost anything.

Unprecedented generativity

Generativity explains why the development of the personal computer was such a big deal. Computing represents the most generative technology in human history. You can imagine the marketing challenge in front of early PC vendors: a device you could use for almost anything. Most of them focused on known use cases around business, learned from the previous generation of computers.

Steve Jobs had a different idea. He knew how special the computer was, often citing the magic of one mind and one computer. His mission was to make computing personal and on a massive scale. Beyond hobbyists and beyond the workplace.

Steve Jobs and the Macintosh, Norman Seeff

The way he felt he could do this was to make the computer as much into an appliance as possible. He knew the real generativity was in software, so make the hardware sterile. Plug it in, turn it on, then point and click to enlightenment.

But Jobs was a control freak. He tried wherever possible to make the software sterile, or at least tethered, as well. Can you imagine how excited he was to move Apple into consumer electronics? Actual appliances! If you recall, he even tried to get away with making the iPhone a closed, sterile appliance with no third-party apps.

This sterile appliance strategy started with the Apple II. But since the Apple II was Wozniak's baby, it was more generative than sterile. One argument in particular about the design of the Apple II captured their contrasting values.

Woz wanted 8 slots for expansion cards in the Apple II. With more slots, third parties could add more functionality to the device. This would increase its potential uses, making it more generative. Jobs only wanted two slots for two specific add-ons: a printer and a modem. Fixed, predictable, sterile.

The story of Steve Jobs is all the more remarkable considering his dedication to the sterile appliance strategy.

The Apple II was a success, but it would seem as though it was because of the generativity that Woz kept intact. Yet every computer product Jobs orchestrated from then on applied the appliance strategy in full force. It seemed to result only in financial failure after failure, from the Lisa, to the Macintosh, to the NeXT Computer. You can imagine somebody in his position might have reconsidered this strategy. He persisted.

Finally, with his return to Apple, the timing was right for the iMac. The technology was cheap enough and the PC market was large enough. The strategy finally had a chance. It worked and it would seem Jobs not once doubted the appliance strategy. In fact, he took it to the next level with the iPod, iPhone, and iPad.

But I don't believe computers should be sterile. It doesn't make sense. They're fundamentally generative. And it wasn't just Jobs that pushed them to be more sterilized. Every business in the industry wanted to sell computers to more people. They were all incentivized to dumb down and sterilize computing. Not just the hardware, but where it hurts the most: the software.

The upside is that computing is now everywhere. Perhaps sterilizing computing was necessary to get here. The problem now is that what we have is a stunted form of computing. I call this Pop Computing.

Generativity and hacker culture

The Apple II expansion slot story was as much about Woz as it was about Jobs. It wasn't just telling about Jobs and his appliance strategy; it was also about Woz having a preference for generative technology. This is significant because Woz is a symbol of hacker culture, and I don't believe his bias for generativity is separate from his values as a hacker.

I'm beginning to define a hacker as a person that needs to use and build generative technology. It might not be a conscious drive, but I believe generativity is part of hacker DNA.

Think about it: they prefer using technologies that have more generative properties. Open source, extensible, programmable … "hackable". They often have a distaste for products that are more sterile. For example, Apple products. Most hacks, including life hacks, are a form of repurposing or applying generativity where it wasn't before. Jailbreaking the iPhone is making a sterile device more generative.

Some say hackers are about freedom, citing open source and free software. Perhaps they're about unchaining from control and authority. But there are plenty of hackers that are okay with some control, or okay with closed source. So I don't believe these are the ideas that define hacker culture as a whole.

Being a hacker seems more about the acts of building, tinkering, and repurposing. But what is the motivating force behind these actions?

I think hacker culture is about a bias towards generativity. Using and building more generative technology. Perhaps hackers are just people that intuit the importance of generativity for humanity.

The point of technology and generativity

In What Technology Wants, Kevin Kelly explored the biases and trends of technology across human history. He broadly defines technology as anything useful created by a human mind. This includes hammers and gadgets, but also law and cities. He found that technology helps us trend towards more choices, opportunities, possibilities, and freedoms.

These are all different ways of talking about the same thing. Every new technology brings us the potential for more new technology. Again, this is not just tools, but art and ideas. Now imagine Mozart before the piano. Or van Gogh before cheap oil paints. Or Hitchcock before film. These creators were made in part by their mediums. If their technologies had not been invented or discovered before their time, their voice might not have been heard.

How many kids are out there today whose medium doesn't exist yet? Kids who might not be able to realize their full potential, or find an outlet that resonates so strongly it fills their life with purpose and meaning, or at least gives them a way to express themselves.

Vine, a micro-video platform, has allowed for a new form of celebrity: Vine stars. This is different from being a movie star, or even being a YouTube star. It's an opportunity that didn't exist before. Or think of Amazon Web Services and how many startups, sometimes just one person, were enabled with on-demand compute technology that was previously only available to the biggest tech companies. If these are too trendy, just think of how many opportunities were opened up by cheap automobiles and later the Interstate Highway System.

In contrast, imagine a world without technology. Where we were incapable of making any sort of technology, including hunting tools, fire, or language. We as a species would probably not survive nature.

So it seems in general, the force of technology is a Good Thing. Toolmaking is what makes us human. It helped establish humanity. And from there it allows us to find new ways to meaningfully exist as individuals and thus as a species.

By definition, generative technology creates far more opportunities and possibilities than its sterile counterpart. Generative technology is more aligned with the basic trend of technology than sterile technology is. Perhaps you could say it's a more potent form of technology. I'm not saying there isn't a place for sterile technology. But understanding the value of technology and generativity, it makes sense that some of us would reject a world filled primarily with sterile technology.

Generativity is important to me. I think it's important to hackers, creators, and producers at large. I think it's worth fighting for, even if to just achieve a better balance between the two. We should at least have a choice of whether our technology is sterile or generative. But it seems that a dominantly consumer-driven society will unfortunately always be biased towards sterile technology.

Dec 04 2015

Leadership, Guilt, and Pull Requests

I have a lot of open source projects. Even more with Glider Labs. Some of them are fairly popular. All of them get me excited. But most of them also bum me out. I'm going to share one of the reasons I've had to take a break for the past couple months, and why all my repositories are now looking for more maintainers.

Open source is hard. It seems easy, though. You just write a piece of software and put it on Github, right? Well that was the easy part. Now comes maintenance. And very likely politics. Inevitably, guilt. Multiply that by the number of open source projects you have and their popularity. End result: open source can be a bummer.

Jacob Thornton (@fat), co-author of Bootstrap, gave a talk a few years back echoing the sentiment of many open source authors and maintainers. He calls it Cute Puppy Syndrome. It's not the best analogy, but it gets the point across. Open source projects, like puppies, are great fun when they start. As they get older and more mature, responsibility seems to outweigh their cuteness. One solution is to put your old dog up for adoption and get a new puppy. As you can tell from his delivery, this analogy is intended to be humorous:

He mentions that many authors of popular open source projects have gotten burnt out and look for an exit. Often handing projects off to maintainers, sometimes never to return. Not to avoid responsibility, but to stay sane. Still, much of the time, that sense of responsibility lingers. As Jacob expands on the puppy analogy:

If you have your puppy and it turns into a dog, you put it up for adoption, you give it to a maintainer. And then he over feeds it and it becomes fat and bloated. And you just sit there and you're really sad because you don't really have time to take care of your puppy any more, but you don't want to see it fat and bloated. So you're just real sad all the time.

Alternatively, you can let issues and PRs pile up. Guilt and sadness either way. At least opening the project up lets it survive and continue to provide value to a larger audience. You just have to let go of the project as it will now evolve in ways you might not agree with.

When I did this with Dokku, the new maintainers did a great job at keeping the project and community healthy. I can't thank them enough for that. I had to let go quite a bit, but the project would probably be dead without them.

In fact, there's something interesting about maintainers that didn't author the project. It's probably different from person to person and project to project, but the maintainers of Dokku don't have the guilt or burden that I do. They're happy to help, and as volunteers don't feel like they owe anybody anything. It's really the ideal situation. Perhaps authors shouldn't be maintainers after a certain point.

That said, even with these great maintainers, Dokku really only kept on an incremental path of maintaining the status quo. That's not necessarily a bad thing, but it meant Dokku wasn't able to develop further in the directions I had originally intended. I thought to myself, well eventually I'll find time to do a system-wide refactoring to get it on this path I want and submit the changes as PRs like any other contributor. That time never came, and the project continued to fall behind the evolving larger vision. The project I started was not living up to my own expectations for it.

Sadness. Guilt.

Then I did something different. It was so simple. I wrote a wiki page describing what I wanted and why I wanted it. For some reason it came as a surprise to me that the maintainers started moving the project in that direction! Did it happen exactly how I'd do it? Not always. But it still brought the project closer to what I wrote down.

This shouldn't have come as a surprise. In essence, this is leadership. There are different forms of leadership, but at the core is the idea of "saying what, not how". It can be very hard for programmers to get into this mindset because our medium is all about the how. Stepping back and writing what you want with flexibility towards how it's implemented takes practice.

This experiment with Dokku was far from perfect. In fact, that document is still incomplete. Project leadership is just as ongoing as maintenance. However, it's something worth getting better at. It's essential to authoring many open source projects and remaining happy enough to keep going. In the case of my projects, since there is always a bigger picture they fit into, it's even more important.

Dokku is just one of many projects, but it's one of the only projects of mine that I'm not an active maintainer of. Dokku isn't why I had to take a break; it was all the others.

Some of you might have seen my ramblings about Megalith. Some of you might even be able to follow them enough to see that most of my open source projects are all basically part of Megalith. Or that Megalith is basically all my projects. You can probably see how this leadership is critical to sustain all these projects while keeping them moving in roughly the same direction.

I don't write open source software to make money. In fact, even solving a particular problem is secondary to working towards a vision of how the world should be. Since that's really what's important to me, I should be spending my time on being an effective leader. At the very least, documenting what I want, the direction, why it's important, what design principles are involved, preferred architectural patterns, and so on. Then helping people understand, integrating their feedback, and letting go of a lot of the details.

To support this, I need to open up our projects to more maintainers. Going forward, I'll be trying a variation of the Pull Request Hack to get more people involved across all projects. If you submit a solid substantial PR or several solid minor PRs to any Glider Labs project, you'll be invited to have commit access across all projects.

Starting now, all public projects under my username or Glider Labs have an open call for maintainers. If you'd like to volunteer to help maintain any of these projects, just join our Slack and in #intros say you're interested in becoming a maintainer.

From there I'll do my best to provide guidance and leadership. Together we'll keep making great things!