Leanpub Podcast Interview #35: Janelle Klein

by Len Epp

published Aug 06, 2016

Janelle Klein

Janelle Klein is the Austin-based author of Idea Flow: How to Measure the PAIN in Software Development. She is the CTO of the recruitment consultancy New Iron and founder of Open Mastery, a network for peer mentorship based on data-driven mastery of software development.

In this interview, Leanpub co-founder Len Epp talks with Janelle about her career, the ideas and experiences that are the inspiration for “Idea Flow”, the concept of “Idea Flow” itself and how it can transform the relationship between developers and management to profoundly improve productivity and quality, and at the very end, Janelle also talks about her experience self-publishing on Leanpub.

This interview was recorded on May 27, 2016.

The full audio for the interview is here. You can subscribe to this podcast in iTunes or add the following podcast URL directly: http://leanpub.com/podcast.xml.

This interview has been edited for conciseness and clarity.

Janelle Klein

Len: Hi, I’m Len Epp from Leanpub, and in this Leanpub Podcast, I’ll be interviewing Janelle Klein. Janelle is an Austin, Texas-based NFJS tour speaker and technical mentor. She’s the founder of the Software Mastery Circle [which is now branded as “Open Mastery” - eds.], which is dedicated to data driven software mastery and aligning the interests of business and software engineering. Janelle is the CTO of the recruitment consultancy, New Iron, that specializes in building software teams.

Idea Flow: How to Measure the PAIN in Software Development by Janelle Klein

Janelle is also the author of the Leanpub book, Idea Flow: How to Measure the PAIN in Software Development. The book is about the hugely important topic of technical risk management and presents, in Janelle’s words, “A modern strategy for systemically optimizing software productivity with a data driven feedback loop by measuring the pain, or friction, in developer experience. People can identify the biggest problems, understand the causes, and run experiments to systematically learn what works.”

In this interview, we’re going to talk about Janelle’s professional interests, her book, her experiences using Leanpub at the end of the interview - and ways we can improve Leanpub for her and other authors. So thank you, Janelle, for being on a Leanpub Podcast.

Janelle: Thank you for having me.

Len: I usually like to start these interviews by asking people for their origin story, and I was wondering if you could tell us a little bit about how you first became interested in Agile and software development.

Janelle: Sure. I’ll start from college - well, slightly before college. When I graduated high school, I ran off and married my high school sweetheart. Not the best decision I ever made. But that got me running off to California to be a professional song writer and following my dreams.

And then I got into college, and I started to realize what a career in music might actually be like. And it just sort of sucked all the passion out of me, thinking about having to write to make money. And I knew absolutely nothing about software development. We had a computer growing up that I played lots of games on. So I’ve always been a huge, avid gamer.

And my ex-husband at the time - he was in the military doing network management, computer-y stuff, kind of a hardware geek - he’s like, “Let’s take this assembly programming class. That’ll be so cool. Assembly’s like awesome. It’s all this low level cool shit, right?” And so I was like, “Okay. That sounds like fun.” So we went and took this assembly programming class, and I was like, “Programming my calculator in math class, I can figure this out.”

I got basic instructions, and I started writing reams and reams of assembly code. I wrote the game, Breakout, with the paddles, and all that, completely in Assembly, with the full 256 colors and little music and beeps when the thing hit the wall - just completely in Assembly. And my teacher was like, “Well, at this point, just show me what you’re working on. You can work on whatever you want, and you get an A.” And I’m like, “That’s cool. I should take more classes like this.”

But that moment really changed things for me, because it was this moment of unlimited exploration - where I can go and create anything I could dream of. It was this ultimate kind of artistic medium, and that’s when I fell in love with software development. I’ve had one experience after another, that’s been this giant open-ended problem of following whatever dreams that you might have, and turning ideas into awesome tools. As I’ve learned how to do that with other people, it’s been one of the coolest experiences imaginable. And really, software development is very much a love of my life.

Len: That’s a fantastic story. When I’m interviewing Leanpub authors, many of them are in software development, and it’s curious - I think at this point, fewer than half of the people I’ve interviewed who ended up in software took something like computer science in university. People come into software development from so many different directions. I was wondering how you ended up at New Iron doing consulting work?

Janelle: So I ended up at this kind of strange company, New Iron, doing - I mentioned it’s a software niche recruiting company, or you mentioned that in my intro - I started working for New Iron just as a contractor on this project at a semiconductor company. I got really involved in semiconductors, and lean manufacturing and supply chain optimization and process control. And doing back end, high volume data automation stuff - problem-space wise, it was really cool.

But as I started to learn more about lean - because the company was all about everything lean - I started to get involved in their lean consulting program, and teaching lean practices around the company, but in the context of software development. After figuring out how to solve the problems on my own project, and just getting better at software development, and how to figure out what your problems are, and turn around a failing ship, I got really good at helping other people to solve those problems.

And so it was never really that complicated in consulting. The main thing I discovered is that the job of a consultant is essentially to come in and identify the elephants and point to them. Usually, if you just listen to what people are saying, they’re already talking about what all their problems are. A lot of the time, I’ll just echo the same things that the team is already saying, and just explain it to management - but with a nice Keynote and explaining things in management-speak.

And suddenly all these communication problems where engineers couldn’t talk to management became my niche. It was figuring out how to communicate all this pain with metaphors and stuff, so that I can bridge that gap. A lot of the failures of projects are caused by these problems, with this inability to communicate. And so, with consulting, it was kind of a natural niche for helping out with that.

And then with New Iron, since we specialize in technical assessment and developer mentorship, I’ve got to focus on teaching people, doing a bunch of stuff with community, mastering the art of technical assessment, and teaching people how to build more effective teams. I mean, it all kind of went together with consulting.

In school I almost got my PhD, but I decided not to, because I was really eager to get into industry, and I really enjoyed like all the HCI kind of stuff. I always had this love of science and research. And I thought about going back to get my PhD, and then realized, “You know, I don’t need a degree to be a scientist, right? If I want to go and do research and learn about problems, I can just set up my own little research lab and learn about problems.”

And so I started treating developers kind of like guinea pigs, and set up a little lab in my work environment, and then started codifying the things I was learning about how to teach the art of software development into patterns and principles using these tools. A lot of that learning created the basis for Idea Flow. So that I didn’t scare everybody away, I decided to write my book as kind of a first-person narrative story of my own experiences, so it didn’t read like a dissertation.

Len: I’m really looking forward to actually talking to you about your book. It’s really interesting. I just have one question I’d like to ask you before we get to that though, which is about - there’s a line, I think, on New Iron’s website that says, “Being a good developer doesn’t make you a good interviewer.” This is something I’ve been reading more and more about in the tech press lately. It’s become a little bit of a meme, I guess, especially in relationship to people with various kinds of disability. I was wondering how you would recommend people hiring software developers to approach giving interviews?

Janelle: I think the main piece of advice I would give developers, is to focus on testing decision-making abilities - rather than current, active skills. The profession of software development is all about being a professional learner. It’s knowing how to find and look up information - how to break down problems, and how to learn your way out of a broken situation.

And so what I eventually came to is, that we need a method of technical assessment that focuses on assessing a person’s capability to reason about problems and navigate trade-off decisions. As opposed to, “Do you know what TDD is?” or “Can you write a unit test?” Because at the end of the day, it’s not about what you can do right now. It’s about what you can learn when you’re on the job. Because you’re never going to be faced with the same problem and the same challenges.

I mean, this industry changes so fast. We’re professional learners. We’re professional problem solvers. And that’s what makes people good - craftsmanship skill that comes from years of intuition built around problem-solving is really what skill is. And that’s what we need to be assessing for. So the main thing I do is put real problems in front of people. And then I ask them lots of questions to poke into their reasoning process, as opposed to, “Can you get from A to B?” It’s about, “What kind of options are you exploring?”

Len: Do you recommend doing things like asking riddles to see how people respond to unfamiliar questions?

Janelle: I’m not good with riddles, so it’s hard for me. Yeah, probably not answering riddles. I would hate to be given a riddle in a - I guess the other thing I’d say, is that when people get nervous, their ability to make intuitive leaps to solutions generally shuts down a bit. But their recognition skills don’t. If you put something in front of somebody, and they don’t recognize that there’s a code smell of sorts, or some problem with things, then that’s usually a sign that something is actually wrong or missing.

Whereas, just because somebody can’t come up with a right answer - it’s just the world is not that black and white. And I know technical assessment is really hard. For example, with respect to certifications, we’ve been working on a way to systematize technical assessment, so then I could teach other people how to do my interview technique.

It’s taken us a decade to figure out how to do that. But a lot of people punt on the whole certification problem, because, “It can’t be done, it’s too hard.” There’s so many problems with commercialized certification in our industry, that everybody is like, “Oh well, we shouldn’t even try, because it’s just going to cause so many problems.”

I think we need to move toward more of a model, where it’s open certification. Where we’re testing decision-making skills, and it’s an industry public debate about working on getting better at this, as opposed to just giving up on the problem, because it can’t be done. And just make technical assessment a free standard, almost, that we work on together as a community. Because we all benefit from having a clear definition of what good is. I mean, we need that.

Len: Yeah, definitely. In that context, I’m curious what you think about nervousness in interviews and things like that? Because the interview is a situation that the person will never find themselves in again - more or less, if you hire them. They’re going to then be working. And so people can be nervous in interviews, but not nervous when they’re doing their work.

Janelle: Sure.

Len: I was once interviewing someone who’d just finished a masters in maths at Cambridge, and failed to do basic arithmetic in response to a question. So I knew what the interviewee was capable of in normal circumstances, but not in the stressful circumstance of an interview. And so even the best certification or qualification - or something like that - that someone comes into the room with on paper, can sometimes not translate in an interview, but can in work. I was wondering if you advocate giving people problems to take home and solve or something like that as part of the recruitment process?

Janelle: I don’t really do take-home problem solving kinds of stuff. I mean, I certainly see the benefits. But the main problem I have with the take-home thing is that it takes too long, it’s too much work. In a market where everybody’s hiring, and there aren’t enough good people, the last thing I want to do is go and create more barriers to entry to hiring the right people.

And so we’ve basically focused on, how can you get the maximum amount of information to make a quality decision in a limited amount of time? Because the take-home thing has to be scheduled around life, whereas it’s much easier to get people to show up for a scheduled one-hour call. And then I basically do that with video sharing and stuff, so I can throw code up on the screen and ask people about it. We’re all about optimizing the time spent in that.

And then the other things I do, for the onsite, are a code pairing interview, and then something that’s more design-focused. So I tend to focus on core skills, as opposed to, “Do you know whatever trivia?” Like whether you know ordering of some… I don’t know? There are some things that are just - Computer Science-related kinds of questions, that are so far removed from what we actually use on a day-to-day job - I’m much more interested in people’s everyday problem solving skills. Can you refactor this code? Can you work with the people on your team? Are you an asshole? Those kind of things.

Len: That’s a really great and very practical answer. I don’t know how much of your work actually is recruiting people in Austin, but I can imagine the competition for good developers there must be really high.

Janelle: Yeah. We pretty much only work in Austin. We have some stuff going on in other states from clients that have multiple sites in most locations, but we’ve always been very central here. And then since our company is run by software folks - it’s kind of interesting how we ended up in software development.

New Iron used to be a product company that built a first generation web services stack. We’re out in California, and I thought, “Oh, we’re going to build this awesome product, and everyone will buy it because it’s so awesome.” Obviously that wasn’t a very good business strategy. And so we’re sitting around going broke, basically, and we got a call from an old client that we used to work with, and they’re like, “Hey, can you guys help us on this consulting project on the software?” And we’re like, “Money, no money.” And so, easy choice, right?

And so as we started consulting there, they’re like, “Do you guys know anybody else that can code?” And since we already had a vendor agreement set up, we just started talking and recruiting all our friends. And we did full-on interviews for everybody, not having any idea how the recruiting industry was supposed to work - that nobody does this kind of thing.

And so we started getting really good at technical assessment. And that sort of became our niche. Because it’s hiring people, and figuring out what skills you need, versus what’s available on the market - and having unrealistic expectations about hiring Superman, and that you can instantly take all these things that you’re deeply familiar with and say, “Oh, now you do this job.” There’s so many companies that have so many unrealistic expectations about it. Because hiring isn’t a first priority in a lot of companies, so people don’t necessarily put the time into it that it needs and deserves.

Yet it’s so important and causes so many problems when you hire the wrong people for the wrong things. And so we got a lot more involved in understanding what our clients actually are trying to accomplish as a company, as a team, and helping them to come up with a realistic strategy so that they can actually find a gap they can fill, as well as the gap that they can sell. Because in this kind of market, if you have a job for, “Oh, this is a Java development position like so many other Java development positions.”

Len: One thing one reads about in the press around recruitment is the way that employers or recruiters will sometimes look into a person’s internet history. And I was curious if that’s something that you recommend employers do?

Janelle: Internet history in terms of, like, what they post online?

Len: Yeah.

Janelle: It’s not something we do in that, generally speaking, we focus on whether the developer has the skills, is able to demonstrate the skills, as opposed to any kind of past history kind of thing. I think since we have a really good process for technical assessment itself, it’s like everything else is almost irrelevant. In that, if you’ve got one year of experience, but you can bring it when it comes down to sitting in front of a computer and solving a problem, you know what? I don’t really care if you’ve only been programming for a year. I’ll just think you’re awesome.

Len: That’s a really fantastic answer, I’m glad to hear that, that sounds very reasonable.

Moving on to the subject of your book - which, by the way, I really liked - you start with a pretty dramatic story about your experience working on a statistical process control system for a semiconductor factory, I think. And you mentioned the elephant in the room before. In that particular experience there was, I guess, a sort of invisible elephant there. I was wondering if you could talk a little bit about that experience and its impact on your approach to complex software engineering projects?

Janelle: Wow. Okay, that’s a big question.

So in terms of the experience, I was on this project with this really great team - really smart people. We had test automation, CI, great team, disciplined practices. And despite doing all the things that we’re supposed to do, our project basically crashed and burned and hit a wall. And by hit a wall, I mean we brought down production three times in a row, and then didn’t ship again for another year.

And the third time this happened - this is a 24/7 operation, so downtime is a really big deal. And the last release, the third time we brought down production - not only did production go down, the rollback failed. And then the feature toggle that disabled the broken stuff failed. And so we were up at like three in the morning hacking out a patch fix, just so we could get the software out of production while everything was down. It was the most miserable experience you can imagine.

And this was shortly after I’d been out of college. This was my second job out of college. I was kind of a little hotshot, and had my head in the clouds, and thought I was awesome and that I was pretty much infallible. And I had this great idea that would improve performance to basically flip the architecture inside out and do this massive set of changes. And, of course, I put that all in production all at once.

It’s just one of those huge lessons in your career that you never forget. And that really fundamentally changed the trajectory of my life. Especially since, with all these things, I was like a super TDD zealot-type at the time. And so I was convinced that our success was all about whether we were following the right practices, and whether we were doing disciplined TDD in our project. That was the thing that mattered.

And I was doing all this great stuff, and I failed anyway. It was the most foundation-shattering experience that I could’ve ever imagined. And so we got together as a team, and we’re like, “What the hell are we going to do?” Because we’d been doing everything right, but we were failing anyway. And so we built this tool that could detect high risk changes in the code, because we thought the problem was technical debt building up over time - the same problems we always talk about causing the pain.

We thought we could use this tool to let us know where we needed to do extra testing, by identifying the areas where there were changes in tech-debt code. But what we found was that most of the mistakes were actually in the most well-written parts of the system. And so we started digging around in the data, trying to make sense of what we found. But there weren’t any answers that made sense and correlated with what we knew about software development, and what we were supposed to be optimizing for.

The correlation we did find in the data was that a lower familiarity with the code tended to increase the likelihood of mistakes. And while that made some sense, it seemed like there was more to the story than that. Because when we had to work with complex code, it was really painful. And so we started down this path of trying to understand, “Well, what is it that makes development feel painful?” And so as we were working, we started collecting all this data on where we were experiencing pain, and then talking about the specific things that were causing the pain, and keeping measurements about how long all these various things took, and just writing up the things that seemed like our biggest problems on a whiteboard.

It wasn’t until we started down this road of collecting data that we realized that we’d been essentially solving all the wrong problems. We had all this test automation, but the tests didn’t catch our bugs. And we had well modularized code, but it was still really difficult to troubleshoot all the defects. And then we started looking at the specific things that were causing our pain, and learning with the data-driven feedback loop, we realized that most of the problems were caused by human factors - how a human interacts with the code while they write it.

And so that’s where this whole technique of measuring development experience came from as a means of feedback. Because the only thing that seemed to really matter at the end of the day was whether it was easier or harder to do our jobs. And when we measured that, data became this unifying force on our team, where it really brought us together in learning how to learn together.

And at the time I had this experience, I couldn’t put what I’d learned into words. But I knew that I wanted to teach it to other people. And so after I read Peter Senge’s book, The Fifth Discipline, about how to build a learning organization, I was so inspired by these ideas that I was like, “Alright, I need to figure out how to write this down.” Because this is the essence of what Peter Senge is talking about in his book – these five disciplines of a learning organization, and learning how to learn together.

Essentially, I found this really tight correlation between science and learning organizations. And then when we start using this scientific method-ish process to learn and examine the data, and have a shared direction of better that we’re all reaching towards, it suddenly unifies a team in a really cool way. And I’ve been fortunate to have some really incredible experiences working with just amazing engineers, where we learned how to learn together as a team.

And so it wasn’t about what practices we were using. It was about whether we could identify the right problems to solve. And once I learned how to identify the problems, the rest of it was easy. I got a team of professional problem solvers. So that turned into the skill that I wanted to figure out how to teach. Even though it was hard - even though we didn’t have words in our vocabulary to describe the things we’d learned - I was determined to figure out how to do that, and to teach these things to other people.

Len: And when you talk about the data gathering, I know from reading your book that there are very specific things around that. I was wondering if you could explain a little bit about what the data is, and how the gathering takes place.

Janelle: Sure. So I’m measuring, specifically, troubleshooting, learning, and rework. And we have tools that integrate with the developer’s IDE that are partially automated, partially manual. From a time-tracking perspective, it’s a lot different than managers wanting us to track our time in JIRA, because these are tools created by developers, for developers to use to understand what their problems are.

So we’ve always kept management out of the data. It’s like, “We’re doing this for us, go away.” We’ve come up with better ways to automate things over time, but essentially - let’s say, when we’re writing code, we go through this cycle of, you write a little code and work out the kinks over and over again. And when you get to that point after you write a little code and you’re validating things, you’ve kind of got a prediction in mind of what you’re expecting the code to do. And at that point, there’s a decision point in our brain. If the code does what we expect it to do, we write a little bit more code. And if the code doesn’t do what we expect, we go into this troubleshooting mode.

And what I found is that, moving forward, modifying more code, learning - basically figuring out what you’re going to do, can take a whole lot of time, especially now that 90% of our software’s built from existing parts. Or, if you’re unfamiliar with the code, learning is a substantial part of our work that gets in our way. And on the troubleshooting side, we’ve got to troubleshoot the problem and then do rework to put things back in place before we can move forward again.

And so this flow of ideas between the developer and the software, and the friction in that flow - I’m measuring in terms of troubleshooting, learning, and rework, and then actual duration of those times. Then, we have a general rule on the team where anything that takes longer than 20 minutes, we talk about, and try and understand what the causes are that made this incident so painful. And it’s always a kind of gut-feel type factor. In one case, 20 minutes is not much time. In another case it’s way more time than it should be. And so you kind of just talk about the things that are worth talking about. Don’t talk about the rest - it’s just kind of noise.
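[Editor’s note: as a rough illustration of the kind of friction tracking Janelle describes here - the names and structure below are hypothetical, not the actual Idea Flow tooling - a minimal sketch might look something like this:]

```python
from dataclasses import dataclass
from enum import Enum

class FrictionType(Enum):
    TROUBLESHOOTING = "troubleshooting"
    LEARNING = "learning"
    REWORK = "rework"

@dataclass
class FrictionEvent:
    friction_type: FrictionType
    duration_minutes: float
    notes: str = ""

# Team convention from the interview: anything that takes longer than
# about 20 minutes is worth talking about as a team.
DISCUSSION_THRESHOLD_MINUTES = 20

def events_worth_discussing(events):
    """Return the painful incidents the team should talk through."""
    return [e for e in events if e.duration_minutes > DISCUSSION_THRESHOLD_MINUTES]

if __name__ == "__main__":
    day = [
        FrictionEvent(FrictionType.TROUBLESHOOTING, 45, "flaky integration test"),
        FrictionEvent(FrictionType.LEARNING, 12, "unfamiliar config module"),
        FrictionEvent(FrictionType.REWORK, 30, "rolled back a schema change"),
    ]
    for event in events_worth_discussing(day):
        print(f"{event.friction_type.value}: {event.duration_minutes} min - {event.notes}")
```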

Len: I find the importance that you place on pain to be really fascinating. I have a sort of specific question I’d like to ask about that. But first I’d like to know if you could maybe define a little bit what you mean when you talk about pain?

Janelle: So in this context, I’m relating pain to friction in idea flow. I use pain specifically for a couple of reasons. One is that I want people to focus on the symptoms, as opposed to the cause. Because one of the problems we have in our industry is automatically assuming what the cause of the pain is - like it’s this ugly code I have here - as opposed to focusing on the symptoms and our experience, and working backwards.

And so, I use the word pain specifically to get people thinking about, “Well, what are the symptoms?” And then let’s have a richer conversation about what the potential causes are. Because usually most problems have multiple factors. And I think that’s one of the places where developers tend to get stuck on best practices, since they’re very solution-focused.

The other thing is that intuitively - just from experience - pain is associated with this idea of something that we want to avoid, and figure out how to have less of or reduce. And from an optimizing idea flow standpoint - that’s kind of the message I want to relay. There’s this threshold that we want to keep in check, at a level that we can deal with. It’s not about eliminating pain. It’s about figuring out how to optimize and manage the pain so it’s at a tolerable level, and stays that way.

Friction - even though I’m associating it with pain - it often doesn’t totally correlate with what developers are using that word for now. For example, one of the perceptions that’s usually really far off in software development, is just our notion of what takes a long time. And one of the things that happens is when we’re busy working on a problem and doing stuff, time seems to zoom by really quickly. Whereas when we’re waiting on something to happen, like waiting for something to execute, it feels really painful, because we have to wait all that time for that thing to get done. And time seems to go really slowly.

And so when you start looking at things in terms of elapsed time, we feel like that thing that took five seconds was like an eternity, and that that was really painful. But then we’ve got all this time that we spent troubleshooting, that we don’t necessarily associate with pain - even though it takes an order of magnitude more time, in terms of human cycles. But because we’re busy doing stuff, and problem solving is even fun - doing things that are fun, we seldom associate them with being painful, right? And so I was trying to create this mapping visually, to sort of remap our pain sensors to something that is explicitly defined, so that we could all share this universal definition of what pain is - the things that slow down the process of idea flow - even though it takes some remapping of our pain sensors.

Len: It’s really interesting to me. One of the reasons why I liked your book so much, was that you talk about addressing pain as a problem, and you were saying just now that pain is something we want to avoid. One of the reasons I find it so compelling is that, that’s so often not true in management practices.

My brother told me a great story where I had a kind of apple-falling-on-Newton’s-head moment. When he walked into a big box store, there were two employees - one of an earlier generation, and one of a later generation. They were handing out flyers or something, wearing little vests. The younger person started walking off, and the older person said, “Where are you going?” And the younger person said, “To the bathroom.” And the older person said, “Make it snappy!” What occurred to me in hearing that story was that pain is something that people often use in managing others - in a sense, positively, right?

Janelle: Interesting.

Len: They use pain as a tool to get the work done. I was thinking about what I’ve read about slavery, and the way there’d be an overseer watching people work, and cracking the whip, and making sure that they stayed active. And I just saw this line from that all the way to this box store, and these two people in vests, and the way observation and the infliction of pain are so baked into various kinds of management practices. We don’t even kind of notice it, it’s so obvious.

I was interviewing someone recently, who said, in his first software job, he was a tester for the Australian government. [Editor’s note: this was actually at a call center, not as a tester for the Australian government. Sorry, Australia!] And whenever he left his station, he had to log why, including going to the bathroom and for how long - the humiliation built in to that.

Anyway, just to finish, the thing I find so interesting about managing software - and especially in a world where software is eating the world, and everything is being driven by software, it’s become this really important form of labor, but it’s invisible.

For example, imagine you’re in that long ancient tradition of the manager who watches, right? It doesn’t matter if you’re watching people sowing seeds or reaping the wheat, or working on a factory floor in an assembly line. You can develop all these practices around watching people. But with programmers, they’re just sitting there, right?

You can imagine being from that tradition, and then you get hired to manage programmers and you’re like, “Well, what’s there for me to do?”

What you’ve done in your book and your ideas is that you’ve completely flipped it around, and abandoned this form of management - that’s so universal and so inappropriate for a world where software is important. People have attempted in so many ways to try and do metrics and things like that around the wrong things. And what you do, in my opinion anyway, is that you’re looking at exactly the right thing, which is taking into account the pain that people feel when they’re doing types of work, and then really thinking that through. So, anyway, I just wanted to say, I think that what you’re doing is actually quite revolutionary.

Janelle: That’s an interesting story and take on the management side of this. One of the other big target audiences I had in mind when writing my book was managers, and helping them to get a feel for what it’s really like to be a developer in this world. If you listen to the way managers talk and the way developers talk, it’s almost like we’re speaking two different languages. Developers use words like “beauty” to describe the code and “elegance” and “firefighting” and “explosions”. And all of our metaphors are in terms of art and war. And if you listen to managers, it’s all about money and profit and bonuses and ROI, and all this money-related stuff.

And I think what we have to realize is that as software organizations, we’re basically building a business that sells profitable artwork. You need to think about the economy as a giant art contest. With software development, it’s very personal in that we very much invest our hearts and souls as software developers into our work. And then seeing your creation get stomped on by all this organizational dysfunction is an emotionally damaging experience.

And so there’s all this cynicism and struggle, and a feeling of helplessness that turns into contempt within our organizations. And we get these divided cultures, where you’ve got a management culture that’s all about trying to raise morale through celebration. And there’s all this fake happiness, for lack of a better word. And then defensiveness around not wanting to shatter the perfect illusion bubble. And so management is often really resistant to talking about problems and pain and things that are going on. Because then it seems like a reflection on them if they’ve got little red marks on their status updates, whereas–

Len: I can’t stop thinking right now, when you’re talking like this, there’s this infamous photo of the Yahoo executive team dressed up in Wizard of Oz outfits, that was being presented to the employees as this sort of like motivational thing. Like the leaders are all on board and sort of superstars, and then the total disconnect in the use of that image, and what’s really going on there.

Janelle: Yeah. I see so many managers that are really concerned about talking about the problems, because they believe that it will bring down morale if they talk about the problems. But in fact it’s quite the opposite that usually occurs. Especially when you have data to have a conversation about, putting the pain on center stage, as opposed to fighting each other, and blaming other people for the problems - you have a beast to conquer. And then when you go and conquer the beast, it builds camaraderie within your company, and it brings people together around seeking truth, as opposed to this culture of imagining the world that we wish we had and trying to will that into existence through celebration. Or the after-work drinks and bitching about our jobs, and how nothing’ll ever change and how everything’s hopeless, which starts to breed this undercurrent of contempt in our organizations - and, emotionally speaking, people are very much going to war.

I see a lot of these problems in organizations across the industry, where our software problems create cultural problems, and have a huge effect on the lives of human beings. I’m hoping that these ideas catch on from a standpoint of a strategy where we can bring the art side of the world together with the money side of the world, and realize that we can have both.

Granted, it takes a lot of work to figure out how to get there and how to learn together to optimize the whole. But at the end of the day, you build something beautiful. And software organizations become about working together to bring something awesome into the world. And then when you’re competing in the world’s art contest, you make money because you built something awesome. At the end of the day, I think those are the kind of things that human beings really care about - it’s having the opportunity to do something that matters, and to experience that with other people.

Len: It’s sort of central to your vision, your insight, into what people are actually doing when they’re engaged in software development. It flows directly from your basic premise - that things like pain, and things like the human element matter because it’s a human activity. It’s so interesting that we live in a professional world, where talking about the human element immediately requires a little bit of defensiveness or something like that, right? Because that’s supposed to be of secondary importance, and just a problem to avoid.

But your idea flow metaphor puts the human activity - it’s sort of like ontologically at the heart of what is really happening, because, as I understand it, the idea is that - well, I’ll quote you here: “That interaction with the code as a stream of ideas, flowing between me and the software, is a correct description of what’s actually happening in coding. That I have an idea that I try to express in code. And that code is then used to carry out various things. And you can see whether my idea was translated properly by me into the code, and then properly read by the machine reading the code. And then the next step is when someone else is reading my code, have I correctly flowed my idea from its origin within me into them?”

I just find that so compelling because that actually is what’s happening, that we need to take into account. I mean, one can say the human side of things, but just that it’s consciousness that is engaging with the machine through code. And to try and take consciousness out of it is to completely misunderstand what this process really is. And that’s why it flows naturally that concepts like pain actually become scientific metrics to measure, because what you’re actually looking at is a process of consciousness engaging with something.

Janelle: Yeah. One of the interesting extensions - I mentioned early on, using mentorship, and creating a developer lab to test these ideas - one of the ways I’ve been looking at idea flow is with mentorship: I have an idea in my head about how to optimize productivity, say. And how do I teach that idea to somebody else? That’s another type of idea flow, of sharing that idea. And the thing that I learned from that is that the only real difference between communicating something and teaching something is whether we take responsibility for whether that other person actually understood the idea or not.

And so, likewise, I came up with a method of testing whether a person understood by testing their ability to think through and make trade-off decisions in a variety of staged scenarios. That led to my technical assessment technique, too. This whole underlying concept of “What does it mean for an idea to get from my head into somebody else’s head, and how do we come up with an explicit, sort of scientific, method for testing whether that has actually occurred or not?” - has given me a feedback loop for learning how to become a better mentor as well.

Len: On that note, you talk about communication, which becomes very precisely defined in this kind of circumstance. You talk about the importance of metaphors. I was wondering if you could talk a little bit about why you think metaphors, and good metaphors, are so important for this process.

Janelle: So, I had a couple key influences with that. One of them is a book I mentioned in Idea Flow called Metaphors We Live By, which is a theory on human understanding as a fabric of metaphor, and that essentially everything we understand is in terms of other things we already understand, and that we build up understanding through relationships of metaphors. And so one metaphor, how it relates to another thing through metaphor, and then how those things are similar and different.

The book goes through a whole bunch of examples in our languages, and comparing cultures, and how when you have a different foundational metaphor, you get a fundamentally different understanding at the cultural level. All of this cultural difference analysis - it’s a fascinating book, and it fundamentally changed the way I started looking at learning and improvement and communication, in that I started to think of my brain like a shape toy, and that I’ve got all these shapes that I know in the world. When I see a square, I sort of shove that into my shape toy brain, if you can imagine that. And in order to be able to recognize something else from my experience, I basically need to define a new shape. And so one of the feedback loops I discovered was that by explicitly defining new vocabulary - like looking at my experiences and then saying, “Oh, this is this pattern” and making up all these new pattern names for types of mistakes, or for different strategies, or for problems in the code that make troubleshooting take a long time, or whatever it is - I basically built a massive taxonomy of patterns.

But in the process of doing this, I realized that I started to recognize things in my experience that I never noticed before, and that there was this feedback loop of, the more patterns I could develop in my vocabulary, the more detailed my observations were from my experience, and the faster I ended up learning. It was like I had amplified what I could learn from each experience, as well as giving myself all these awesome tools and connections for synthesizing information I’d learned.

And there was one metaphor that had a huge impact on my thinking. So Ash Maurya - he’s got a book coming out really soon called Scaling Lean - I went to a workshop of his like three years ago, when he first started working on this book, and he introduced this metaphor he calls “the customer factory”. So Ash focuses on lean startup practitioner-related material.

Essentially what he’d done is codify a method for practicing everyday science, and then mapped lean manufacturing to kind of a customer experience model. So if you imagine your business is like an amusement park, and you’ve got people lining up to the entrance of the amusement park, when they want to use your product and then, their first experience of taking the rides - that kind of thing. And then he came up with this way to measure and model the flow of different experiences through the business, and measure that and optimize what he calls “happy customer flow”.

And what was so fascinating about this is that it was like an experience business - sort of an experience-based supply chain model. And since all the stuff I started doing was in developer experience, as I started looking at the problems, I figured out how to build a supply chain model, mapping the same kind of metaphor idea to software development. But rather than the process flow that we normally think of as the supply chain, I was using the software dependency supply chain with experience-based data on top of it - how you experienced changing a particular software component, for example - and modelling across software dependencies.
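[Editor’s note: as a purely hypothetical sketch of what “experience-based data on top of the software dependency supply chain” might look like - the component names and numbers are illustrative, not taken from Janelle’s actual tooling:]

```python
# Dependency edges: each component maps to the components it depends on.
dependencies = {
    "billing-ui": ["billing-api"],
    "billing-api": ["payments-lib", "shared-db"],
    "payments-lib": ["shared-db"],
}

# Friction minutes recorded while developers changed each component.
friction_minutes = {
    "billing-ui": 35,
    "billing-api": 140,
    "payments-lib": 20,
    "shared-db": 310,
}

def downstream_pain(component, deps, pain):
    """Total friction of a component plus everything it pulls in."""
    seen, stack, total = set(), [component], 0
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        total += pain.get(current, 0)
        stack.extend(deps.get(current, []))
    return total

# Rank components by the pain they propagate through the dependency chain.
for name in sorted(dependencies, key=lambda c: -downstream_pain(c, dependencies, friction_minutes)):
    print(name, downstream_pain(name, dependencies, friction_minutes), "minutes of friction")
```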

And once I figured out how to do that, and optimize idea flow across the idea flow factory as it were - since my background is in process control and supply chain optimization from the manufacturing world - suddenly it was like everything I had learned from lean manufacturing just clicked together with everything from Lean Startup. That clicked together with everything I’d been doing with idea flow, and it was like this massive synergy of patterns that suddenly went together, and they were all linked together through metaphor.

Whenever I discover a new metaphorical link, it’s like all of these new discoveries and ways that you can make associations - everything just fundamentally shifts. And I think the major revolutions in our industry always come with some kind of metaphorical shift.

Len: That’s a really fascinating way of putting it. I mean, you used some great metaphors in there as well - like the brain as shape toy interacting with the world. And customer experience as being like an amusement park, and that’s just amazing. I think you sort of proved your point by using those metaphors so well. Because it’s sort of like, when you successfully use a metaphor, it’s like a shortcut to communicating not just a set of facts, but a conceptual structure around viewing facts - and understanding what you’re actually looking at in a practical way.

In your book, you also talk about addressing the root cause of project failure in the software industry, and this is something I think is one of the most important things facing industry, I guess, generally today - is that as everything becomes software-driven, understanding the issues in software development becomes more important. In particular, we all see in the popular press these catastrophic software failures.

One of the ones that first comes to mind for most people - or at least in the West - would probably be the Obamacare roll-out and how that failed. Up here in Canada, we’ve had our share of these catastrophic failures as well. I was wondering, in the context of your concept of idea flow and your book, if you could maybe explain what you think the root cause of project failure is in the software industry, and then by extension all of industry, because it’s driven by software?

Janelle: I think there’s a few different core problems, but I’d say the one that’s probably more of the root would be our inability to share knowledge as an industry. We’re sharing this solution-centric knowledge with one another, and what I found is it’s not enough. The things that we really need to know in our industry are largely tacit knowledge, and they’re largely learned through mentorship.

As the industry is growing, and more people are getting into the industry, they’re not getting the kind of education they need in school to know how to do these things. We end up with more and more projects with people that don’t have the skills that they need to do their jobs, and don’t know how to communicate these things to management. And management doesn’t know how to understand what’s happening on the software project to make it be successful.

The combination of all that is what I’m referring to as the “wall of ignorance” between the management world and the engineering world - how we’re basically trying to explain and describe our problems without being completely understood - and that’s one major factor. The other one is not being able to share our knowledge as an industry, or have any kind of universal definition of “better” that we’re aiming for. So it’s like we talk about brevity and modularity and test coverage, but all of these things are really kind of a means to an end, and we don’t have a way to describe what better really means, as the center point in that conversation.

I think that’s one of the big things that idea flow brings to the table: it gives us a way to share our knowledge on how to do this stuff. Because these projects that fail - it’s just sad. Especially when the taxpayers get stuck with the bill for mismanaged projects, run by people that don’t know what they’re doing, when the knowledge of how to run a successful software project is very much knowledge that exists within our industry.

If we could do a better job at, one, sharing our knowledge, and then even taking responsibility for public-service-based projects, and potentially running these things as open projects managed by the industry - as opposed to having the government, which has no idea how to build software, running these broken software projects. It’s a tax burden that is completely unnecessary, because we can solve these problems. It’s not - I should say it’s not that software isn’t difficult, it’s that sometimes the failures are so unnecessary.

Len: It certainly looks that way from the consumer end of things.

Janelle: It depresses me a little bit that we end up wasting so much of our money on things that - for example, if we ran healthcare.gov as an open source project, I think things would’ve turned out very differently. For one, we could’ve basically had the thing happen for free, based on volunteer support. And because it’s something that’s important to the people - I’ll bet you people would do a good job at managing that project. Just because a lot of people care. And we can build societal support based on just humans caring about each other, and helping each other to build the stuff that we need.

Len: It seems to me that - well, with government in particular, there’s the challenge that being skilled at getting government contracts, and being skilled at building software, are two very different, and perhaps, incompatible things.

You talk about the “wall of ignorance” between sort of people doing software development and managers. I think that from conventional management perspectives, there are so many things about software development that are just new and counter-intuitive.

For example, if you’re a typical manager - and I’m going to just stereotype negatively here - you want to have more people underneath you. “I’ve got 10 people underneath me.” “Well, I’ve got 100, you loser,” would be the talk over the bar, right? But in software, it could be that having more people is worse, and actually lowers productivity, and lowers the quality of the work that you’re doing. And you can’t just have more people doing the planting or working the machines or something like that.

For example, a person just sitting there thinking, might be way more productive than a person typing or doing something active. In a very straightforward way, less is more, right? You want the smallest code base you can get, not the biggest. A manager might brag, “Well, my project has eight million lines of code.” And you’re like, “Well mine only has 10,000.” But it could be that what you’ve achieved with those 10,000 lines of code is superior to what the other group achieved with 8 million lines of code.

And so I think that a lot of the ways people measure success are out of alignment with the ways we ought to be measuring success, and that’s partly why that “wall of ignorance” is there. You say ROI and money are the terms that managers are living with, and the developers are living with, say, beauty and efficiency, or something like that. And it just seems to me that one of the reasons those big, especially government, projects fail, is that there’s this concept that management and work are these fundamentally different things, and that’s there from the start. So sort of the whole thing needs to be thrown out, in order for it to be improved.

Janelle: Yeah. I–

Len: You don’t need to agree with anything I said, by the way.

Janelle: I was just thinking in terms of management. I haven’t really run into any managers that were concerned about trying to measure lines of code as a measure of progress. I think we’ve moved past that era. What I see happening more is, the first problem you brought up definitely, of not realizing the impact of adding people to the project, or assuming that you can replace one developer with another developer. And not realizing the impact and the learning curve, in the effects of familiarity on a team.

Part of the problem I think is that - back to the metaphor of technical debt, that has us kind of thinking about the problems in terms of an interest rate, or like a predictable stream of interest payments over time. And when we think about interest, we think about, “Oh, we’re paying 10% interest, 20% interest”, and it’s this steady, predictable thing. It creates the illusion that we can throw more people at the problem, and distribute the cost across more people - when in actuality what’s going on isn’t a problem with a predictable increase in cost, it’s a problem with chaos and a loss of control. And that metaphor is failing to explain the true nature of the problem that we’re trying to solve.

I realized this when I was in a business coaching session with Keith Cunningham, and I was talking about technical debt. And none of these people in the class knew anything about software. But I was trying to explain it in terms of these metaphors, and how technical debt was such a bad thing and what it was like. And the response was, “Well, that doesn’t seem so bad.” And I’m like, “What? Don’t you even want to know like the interest rate or something?”

And what I learned in that world was that the way these guys do math, and make decisions at the investment level, just seems so crazy, because it’s so far removed. But it’s like, “Okay, we’ve got revenue minus cost equals profit, and our goal is to raise profit by 10%. Well, how are we going to do that?” Well, you have to basically come up with an investment strategy of how you’re going to invest money in all these different buckets to make this prediction come true.

And what I realized in this class was that what makes investment decisions more difficult isn’t an increase in cost, it’s a reduction in predictability. And that chaos and the loss of predictability is much scarier than an increase in cost. And when I started talking to management, I started framing everything in terms of a risk-based decision. As in, “the deadline will be here either way.” And the decision is about where we want to be when that deadline gets here. It’s not about - as soon as you start negotiating things in terms of time, you get back to that metaphor of “How can we throw more people at the problem to make it go faster?”

And then the other thing I still see is people thinking about developers like a commodity resource. And what’s fascinating is when you look at the effects of a loss of familiarity on a team - I have one case study in the book I did with this team that was an awesome team. But basically it was in a post-exodus phase, where all the original developers left and the team had this huge breakdown. And then they got new management, which brought in some great people. But they had this code base that nobody understood.

And the lack of familiarity cost - 80% to 90% of their time was spent figuring out what to do. And the cost when familiarity walks out the door is insane. I mean, a lot of rewriting your software is just to reestablish familiarity, because we can’t work when we can’t relearn the system. And you just get stuck in that state of 80%, 90% friction. And when we’re talking about interest rates, you don’t think about stuff that’s taking up 80% to 90% of your capacity. It’s unreal when you start looking at how much time we waste on all these problems.

Len: It’s interesting. You talk about technical debt, getting us to see as nouns what we should be seeing as painful verbs. I really like that, and I was wondering if you could explain - on the subject of technical debt, a little bit more about what you mean by that?

Janelle: Sure. This kind of goes back to the metaphor mapping thing. We’ve got different metaphors that we map in our brain. We’ve got objects, which are basically thing-patterns. We’ve got spatial relationships and context-type patterns. And then we’ve got process-patterns, or things that happen over time. And all these different metaphors have these three basic types.

One of the things I realized is that technical debt is a noun. And the effects of it being a noun mean that we look for things that are noun-like in our experience. And generally we look at, “Okay, so this part of the code is a noun - this technical debt thing. And this is a painful technical debt. This is a low pain or a high pain.” But it’s all one-dimensional. We bucket our experiences in terms of high-pain and low-pain technical debt. But idea flow is a process of understanding and extending the software. And so pain occurs in the context of a process.

And once we shift from a thing-pattern to a process-pattern, suddenly we start seeing our pain in terms of a journey. We’re in these situations. We make these decisions to navigate around obstacles, and have to dig through different areas of friction and break down constraints. And you start to see the problem solving experience, and the true complexity of what’s going on, because we can start relating to journey metaphors, or things that happen in time, as opposed to just one-dimensional things.
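
[A tiny illustrative sketch of that noun-to-process shift: instead of tagging a piece of code as “high-pain technical debt”, record the friction events along a task’s timeline and look at where the time actually went. The `FrictionEvent` type and the sample journey are hypothetical, not the Idea Flow tooling described in the book. - eds.]

```python
from dataclasses import dataclass

@dataclass
class FrictionEvent:
    minute: int        # when the pain showed up along the task's timeline
    kind: str          # e.g. "relearning", "troubleshooting", "rework"
    duration_min: int  # how long it lasted

# One task's journey, recorded as a process rather than labelled as a thing.
journey = [
    FrictionEvent(minute=15, kind="relearning", duration_min=40),
    FrictionEvent(minute=70, kind="troubleshooting", duration_min=90),
    FrictionEvent(minute=180, kind="rework", duration_min=25),
]

task_length_min = 240
pain_min = sum(e.duration_min for e in journey)
print(f"{pain_min} of {task_length_min} minutes spent in friction "
      f"({pain_min / task_length_min:.0%})")
```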

If you look at the patterns in my book, they’re patterns of mistakes and patterns of causes of troubleshooting time. They’re not like any patterns we have in our vocabulary at all. They’re patterns in development experience that I didn’t start to discover until I’d shifted my foundational way of thinking. I started thinking about pain as a process, and asking what all the factors were along my problem-solving journey that ended up affecting my pain.

Len: I’ve got one last question about your book. I was wondering if you could say, to someone who is listening who is managing, has managed, or will be managing a software project, how they can use the concept of idea flow to improve their work?

Janelle: So would this be for a developer or for like a manager?

Len: For a manager.

Janelle: Okay. One thing I’d say is that the techniques I’m proposing in my book need to be developer-owned, in that the development team needs to take responsibility for identifying their problems, understanding what they are, and figuring out how to fix those problems. Learning how to do that and discovering the right problems takes time. If we don’t allocate time to figure out what the right problems to solve are, then we’ll inadvertently end up solving the wrong problems whenever we try and work on improvement.

If management can agree to let the development team work on the most important problems to solve - whatever those might be - and the development team agrees to gather data to identify what the biggest problems are and then share what they’re learning with management, then we just set aside capacity to do that, and that becomes the contract between management and development. Then you can start working your way to a better place, shifting the ownership of making those decisions, gathering requirements, and solving those problems to engineering. And we can start steering our projects.

We talk about the importance of empowered teams. But it’s not just about self-organizing and figuring out what we ought to do, because we’re all too busy to do that. And so we just shortcut all the structural things, and do what we need to do right now - and worry about all the process stuff later. Because we’d much rather write code.

I think we need to have that structure. It’s worth putting some time into designing our organizational structure - treating it more like architecture - and making it a first-class responsibility to understand the communication dynamics in our organizations. And we should start creating more human-systems-architecture roles, if you will.

I’ve also got some diagrams that use software as a metaphor for human systems design, and that model some of the design problems in our organizational structures that cause pain. I think if you use that as a template - even if you do nothing else and take absolutely nothing else from the book - that one change will make all the difference in the success of the project.

Len: Thanks very much for that. That’s a really great answer. I was just wondering, what’s next for you? You’ve got the book and you’ve got these great ideas that you want to spread - what are your next steps?

Janelle: Wow, I’ve got so many next steps. Right now, I’m doing a speaking tour with No Fluff Just Stuff. I’ve got a five-talk lineup that I’m doing, breaking down these problems of how to do data-driven software mastery, and then the organizational side - how to build a learning organization. And then there’s my business, and getting time to work on it - I’m a very firm believer that you just get up and do it anyway, and that you don’t grow without stepping outside your comfort zone. So I’m learning my way through it. I’ve had things that I wanted to share for a long time. This book took me five years to write - rewriting things over and over again and failing to communicate, because all this stuff is really abstract. I can iterate a lot faster on stage, I’ve found. But it’s kind of a challenge.

But the thing I’m working on right now is building a community around these ideas at Open Mastery, which is an industry peer learning network focused on data-driven software mastery - essentially companies and individuals that are interested in trying out these ideas. I’m working on spinning up local communities, where we can start implementing these practices in our organizations iteratively, and then come together in a local community group to talk about lessons learned and codify what we learned into patterns and principles.

And then we’re building a new vocabulary of patterns on Wikipedia, we’re hanging out and talking over Slack, and we’re writing blogs … we have every day, to build a community around. But my goal is to lead an effort in learning how to learn together, with the pain on center stage. There are so many problems in our industry with broken feedback loops - between the education system and our economic system, and between management and engineering.

If you start looking at a lot of the problems and pain in the world, it all kind of comes back to broken feedback loops. And so, what I’m trying to lead is an effort in learning how to learn together - as an organization, and as a community and as an industry. And hopefully, if I can get this going, as a species. So that’s kind of what I want to do with my life. And I’m hoping that I can find a group of people that are interested in learning along with me. Because it’s a whole lot of fun, but it’s a whole lot more fun to do it with other people.

Len: Thanks very much for taking some time in your life to do this interview. I hope it helps spread your ideas, and helps you achieve that. And I really hope you succeed, by the way.

I guess I’d like to ask you one question about Leanpub, and that’s why you chose Leanpub for your book. Obviously, for something that took five years to write, how you distribute it and how you publish it is a very important decision. I was wondering if you intend to keep it on Leanpub permanently, or are you looking for a publishing contract?

Janelle: So your first question of why Leanpub - I mean, with software development and Agile and lean being so central to my life, I live and breathe “iterate”. I wanted a platform that was really easy to get started with, where I couldn’t get in my own way with all the excuses I would make for not doing this. I remember first reading the Leanpub documentation, and how writing your book in Markdown without all these formatting things was presented as a feature.

And at first I was upset that I didn’t have all these fancy tools. And then I started to realize, “Oh, I get what this means now” - how much I get in my own way when writing my book, because I’m sitting there fussing with what things will look like, as opposed to just writing the material. I came to really like the flow and the way the tools and your content are organized - it’s just really straightforward and easy to work with. You focus on writing your book, as opposed to anything else. I’ve very much enjoyed that aspect of the experience. It’s been great. And then I can easily publish drafts and get feedback on things, which I very much needed. My book wouldn’t be anywhere near what it is today without five years of feedback. Some of it wasn’t the nicest feedback that you want to hear, but that doesn’t mean it’s not what you need to hear. But yeah, I’ve been thankful for the tooling, and it’s awesome.

Len: Cool, thanks. I guess I have a tendency sometimes to ask two questions at once, but are you going to be looking for a conventional publishing contract and distributing the book through a conventional publisher at some point?

Janelle: I am. I’m working with O’Reilly right now, and I ended up taking my book and splitting it into two, because I just had way too much content. So what I’m working on right now is a second book - I’m about 80% of the way through it - for which my tentative title is How to Build a Learning Organization, which is kind of the learning framework side of Idea Flow. I ended up focusing Idea Flow just on how you measure pain, and I’m splitting the learning framework itself into another book.

One of the things that I’ve learned in trying to sell these ideas is that I’ve got a two-sided audience, one side being developers and the other being management. So in my second book, I’m going to focus more on looking at the problem from a top-down standpoint: what managers need to do to support engineers, what the organizational protocol looks like from each side of the organization, and then how we start building an integrated organizational learning process, so we can create a steering wheel of sorts at the organizational level - which is a hugely complex problem.

But I’ve got an initial set of blueprints as a starting point, and I’m using Open Mastery as a vehicle to get the community to take ownership of getting from point A to point Awesome. And so, let’s see - what was your original question? I forgot.

Len: It was, are you going to be looking for a conventional publishing contract?

Janelle: Oh, yeah. So I’m working with O’Reilly, and I’ll keep my book on Leanpub probably as long as I can. But my main goal is trying to get my book in front of as many eyeballs as possible, more than anything else. And working in partnership with a giant marketing machine has its advantages.

Len: I just wanted to say on that note, for us at Leanpub, when someone starts their book on Leanpub and then ends up going with a publisher in the end, and retiring their book on Leanpub, almost nothing makes us happier. I mean, that is just a fantastic outcome for the author and for us, because part of our goal is to open up to the public that time when a book is normally hidden in stealth mode on a person’s hard drive.

If all books were published while they were in progress, and then got taken up by publishers at the end, it would be a win on all sides - for everyone, including readers and authors.

I guess my last question is selfish. Speaking to someone who thinks so much about improving processes: if there were one thing that you could ask us to build or to improve about Leanpub, what would that be?

Janelle: Probably the thing that I feel I’m missing the most, and that I’ve been trying to make up for in other ways, is the ability to build a community around the book that’s integrated with the tooling. I know there’s kind of an email list setup, but when people buy the book - unless they check the little box to share their email - it’s really hard to create relationships out of that. I’ve tried to come up with other ways to work around that, with signups through other kinds of tools to collect emails, so I could stay in touch with the community. But that’s probably the big thing I feel is in my way as an author: getting the community of people around the book talking to each other, as opposed to just this one-way communication. And I realize there’s discussion comments and stuff, but it just has a very different kind of feel to it.

Len: Thanks very much for that. You’re the second author I’ve interviewed just in the last couple of months to talk about community when asked that question. So that’s a very strong signal, and it’s actually something that we’ve been thinking really hard about. Hopefully, somewhere in the near to medium term, we’ll have a better community experience. As you noted, we have a sort of double-blind “email the author” process if people want to use it that way. And we’ve got Disqus comments if authors don’t turn them off.

But there’s just so much more we know we can do around that. From our perspective, we’re sort of building the world’s first library of in-progress books. What does that look like? And what is the importance of community around that? There are lots of interesting questions around, for example, reviews, right? Back to 1996 Amazon. But what do you do with a review of an unfinished book when a new chapter is added, or something like that? There’s an interesting aspect of time involved. And it’s just so interesting to think about building community throughout that time, and around an unfinished thing.

Janelle: Yeah, I can see that being a really difficult challenge. I think about some of the early feedback I got that was just awful - stuff that you’d never want to make public. Like, “I have no idea what this book’s about, or what you’re trying to say,” or, “You just need to rewrite this whole thing from scratch. It makes no sense.” You don’t necessarily want feedback like that to be a big public spectacle that you can never get rid of. I think there needs to be some privacy around certain aspects of feedback - I don’t think you’ll get the kind of feedback you need in a lot of cases unless it’s private.

Len: That’s a really good point.

Janelle: It’s like there’s two different - yeah, I’m not sure. It’s an interesting challenge you guys have.

Len: Well, that’s really interesting. I hadn’t thought of that before. Because yeah - one of the processes that’s pretty commonly used by Leanpub authors is to put their email address in the introduction to their book and say, “Please email me.” And then that is almost always a private communication. But I hadn’t put it together that one of the reasons that approach works so well - it sounds so clunky - is that it has this value of being private.

Janelle: Yeah.

Len: That’s really interesting.

Well, I think this is my first feature-length interview! I want to thank you very much for your time, and for all your great answers to my questions - and for your book as well. So, thanks very much for being on the Leanpub Podcast and for being a Leanpub author!

Janelle: Thank you for inviting me. This was really fun.
