Andy Grove, Author of How Query Engines Work: An Introductory Guide
A Leanpub Frontmatter Podcast Interview with Andy Grove, Author of How Query Engines Work: An Introductory Guide
Andy Grove is the author of the Leanpub book How Query Engines Work: An Introductory Guide. In this interview, Leanpub co-founder Len Epp talks with Andy about his background, how he got his first programming job at the age of sixteen, moving to the US from the UK, what query engines are, his book, and at the end, they talk a little bit about his experience as a self-published author.
This interview was recorded on February 9, 2021.
The full audio for the interview is here: https://s3.amazonaws.com/leanpub_podcasts/FM172-Andy-Grove-2021-02-09.mp3. You can subscribe to the Frontmatter podcast in iTunes here https://itunes.apple.com/ca/podcast/leanpub-podcast/id517117137 or add the podcast URL directly here: https://itunes.apple.com/ca/podcast/leanpub-podcast/id517117137.
This interview has been edited for conciseness and clarity.
Transcript
Len: Hi I'm Len Epp from Leanpub, and in this episode of the Frontmatter podcast I'll be interviewing Andy Grove.
Based in Broomfield, Colorado, Andy is a software engineer who specializes in query engines and distributed systems, and he currently works as a Principal Distributed Systems Engineer at NVIDIA.
You can follow him on Twitter @andygrove73 and check out his website at andygrove.io.
Andy is the author of the Leanpub book How Query Engines Work: An Introductory Guide.
In the book, Andy introduces readers to the topic of query engines generally, and shows you how to build a working SQL query engine.
In this interview, we’re going to talk about Andy's background and career, professional interests, his book, and at the end we'll talk about his experience as a self-published author.
So, thank you Andy for being on the Leanpub Frontmatter Podcast.
Andy: Thank you for having me.
Len: I always like to start these interviews by asking people for their origin story. So, I was wondering if you could talk a little bit about where you grew up, and how you first became interested in computers and technology?
Andy: I grew up in a small village in England, not too far from London. And around the age of eleven or twelve, my parents decided to send me on a course over the summer, just like a two or three day thing at my local library, where they were teaching computer programming.
I had no exposure to computers before this time. And after three days doing it, I was kind of hooked. Pretty soon I decided that's what I was going to do with my life - I was going to be a programmer, as we called it back then. So that's where it started.
I had intentions to go to college, but I ended up leaving school at sixteen, which is when high school actually finishes in England. I went straight into a career as a junior programmer, and worked my way up from there. That was a very long time ago, about 30 years ago.
Len: That's really fascinating. So your first programming job was when you were sixteen or seventeen?
Andy: Yeah. Actually, I was sixteen when I started my job. I was incredibly young, commuting into London every day. I was learning so much, and really enjoying it - barely earning any money of course. It really felt good to be doing what I was passionate about, and getting paid to do it.
Len: And what was your first job?
Andy: I worked for a finance company, and I helped build litigation systems. That's for reporting. It's basically database work, using some really old technology called dBase, which was a language that allowed you to process data in files. This was on IBM XT computers, way before client-server systems. The company was mostly mainframe-based, and PCs were kind of a new thing they were experimenting with.

Len: Just to set the timeline here, I'm looking at your LinkedIn profile - yeah, your first job was in 1989.
Andy: Right.
Len: So you were there at a period of time that would be kind of unrecognizable to a seventeen-year-old getting into software engineering now.
Andy: Absolutely. I remember being really excited when this thing called the internet was invented, and the World Wide Web. It was suddenly much easier to learn new skills. I no longer had to go down to my local bookshop -
Len: I was just going to ask, what was it like learning? I mean, you were obviously learning on the job. And so you would buy books. Did they have a corporate library, or anything like that?
Andy: No, this company was - it was a pretty small company. And yeah, it was very hard to learn. The only way to really learn at the time was mostly through working with other people that had more experience. Which was, frankly, everybody else at the time. And yeah, going to the library or going to a bookstore.
Len: And did you find it exhilarating or intimidating to commute into London every day as such a young person?
Andy: Yeah. I mean, a bit of each. More boring after a while of course. And being England, the weather isn't the best, so it was kind of a drag standing on - I have these memories of standing on the train platform, in the freezing cold and rain in the mornings.
It's kind of going off topic, but I had the option later on in my career to start working in the US, and had some trips there. And I noticed that everybody seemed to drive to work - there's less train commuting. And I wonder if that was one of the things that made me kind of want to move out here.
Len: I'm looking forward to asking you about that move in a bit. But yeah - long time listeners of the show know that I spent some years living in London myself.
Andy: Oh, okay.
Len: And I still remember commuting. Usually I was on the Tube, and particularly on the Northern Line. I remember a couple of interesting things about it. One, about how much it sucked. I mean, it really sucked. Like sometimes two or three trains would go by, before finally one or two people might squeeze out, and you squeeze in. I mean, it's sort of hard to imagine already in COVID times, but like, you were physically pressed up against many different people.
And one of the curious things about commuting in London - I don't know if it's like this now, but at the time in the late nineties, you could be on a platform with like 500 people, and it was dead quiet. Because no one would talk.
The only time people would talk, would be if you were on this crammed train and the announcer started saying something and no one could hear, and then everybody would laugh. But other than that, it was dead quiet.
Andy: Yeah.
Len: It was just a curious feature of commuting in London. But yeah - a very unpleasant experience, whether you're on the train or on the Tube. And I can see how driving would be very attractive as an alternative.
So, you did eventually move to the US. I've actually spoken to quite a few people who've moved from one country to another on the podcast. What was your experience like? Was it easy, was it hard?
Andy: It was, I mean, a bit of each. They obviously speak English there, and that made it very easy. I know some people who've moved where they have to learn a language, and that's definitely a challenge. So from that point of view, it was fairly easy.
I came over to work for a very small company, a startup. It wasn't very corporate, so I didn't have a kind of relocation package. So that made things kind of challenging on financial fronts.
But I knew that I would have - or at least I had a strong belief that I would have - better career opportunities over here. I mean, there are lots of software jobs in London and the UK. But the software industry itself, as you know, is mostly US-based. So I took the risk - there'd be some short-term pain, but there'd be longer-term benefits to moving here.
Len: And did you move to the Denver area?
Andy: I moved straight to Broomfield, Colorado. The guy that I was going to be working with was based here. My wife and I also checked out the local school district. So those were really the main reasons. It's very hard to choose an area to live in another country that you have no experience with, and we had to pick somewhere. We chose Broomfield, and we're still in Broomfield - so, yeah, I guess that worked out well?
Len: Was there any culture shock?
Andy: Yes, there was. The things that are really a big contrast for me from England - they're pretty obvious ones. The healthcare system, guns. Those are really the main two that were kind of shocking to us, just the differences. And in those early years, again, not working for a big corporation - we didn't have amazing health insurance. We had some surprises there, for sure.
Len: I've heard that before from people. The sort of shock of going - I mean, either way. Like from the States to a place with public healthcare, or from a place with public healthcare to the United States. I think one of the best, or most evocative, descriptions I heard was from an American professor at a Canadian university who had moved there from the United States, who said, "There's an edge to life in the States that I don't sense here in Canada." And she said, "I think it's probably partly down to the healthcare thing." She was speaking specifically about her students, right? They didn't seem to have this kind of sense of a cliff they might fall off, if they don't get everything just right.
Andy: Oh, that's interesting.
Len: One question I actually always like to ask people, if they're in IT, is - the formulation of the question depends on how it went for them - but if you were starting out now with the intention of having a similar career to the one that you've had, would you choose to study Computer Science formally, at a university? Or would you choose another path?
Andy: That's a great question. I have three children. My oldest son is actually doing a Computer Science degree now, which I strongly encouraged him to do. I was self-taught, and I've learnt a lot from working with some great people over the years. But I feel that if I had had the opportunity to go to college, I would've taken it. I think there is a benefit to learning more of the theory early on, for sure.
Len: Thanks very much for that very clear and straightforward answer. Sometimes it's a little bit more convoluted. But that's great.
Just before we go on to talking about your book, I wanted to take an opportunity to do a little segment that we introduced way back in March, where we ask the guest how the pandemic has been affecting them.
So, it's been quite a few months now. Could you talk a little bit generally about what it's been like in the area where you live in Broomfield? And how, or whether, the pandemic has affected your professional activity?
Andy: Sure, absolutely. So in Broomfield, I mean - COVID's been pretty bad. More recently, the numbers are going down. But from the start, my family - we decided to be pretty cautious about the whole thing. So originally we were just doing the essential trips - groceries and so on. But the interesting thing was, I actually started my role at NVIDIA back in March. So I had two trips out to the offices in Champaign, Illinois. I think it was in February for an interview. And then I went back a few weeks later for my first week in the role.
And the initial plan was, I would be doing more of these trips, to spend more time with the team. But this is just as the world was realizing the kind of seriousness of the COVID situation. So I ended up not doing that.
That was definitely a challenge, starting a new role with a new team, and being fully remote. I would just prefer to have more time with the team early on, to get up to speed. So that was a challenge, for sure.
But I'm used to working from home. I've actually worked mostly from home for the last - gosh, I want to say - fifteen years now. So that side of it was quite easy for me. I'm familiar, I have a good home set up... [?] And so that part wasn't so bad.
Len: And so did people start wearing masks very soon in your neighborhood?
Andy: Not really, no. In fact, I remember my wife and I - we were at Whole Foods one time, and we were pretty much the only people wearing masks. We had decided early on that it might be a good idea. I remember the staff kind of suppressing their laughter serving us. And, of course - four weeks later, they were wearing masks too. I mean, it doesn't hurt to be cautious in these situations when there's a lot of - there were a lot of unknowns at the time about how the virus was being transmitted. We have children, so we played it safe.
Len: Thank you for sharing that. One thing I've noticed - and this is just sort of anecdotal - but people who work in the tech industry, seemed to be a little bit - I mean, well, early adopters of anti-COVID measures. It certainly became a reality for me long before it did for many of my friends and family.
Specifically here where I live, on Vancouver Island in the City of Victoria, it's only in the last couple of months that people have actually started regularly wearing masks outside. I would say it's about half. I don't, actually. If the convention turned into, "Hey, yo, Goober, put on a mask," I'd do it in a heartbeat. And it looks like it might finally be getting there. But I know that in some places like New York City and stuff like that, it was like, "We wear a mask like outside all the time," from the beginning.
Andy: Right.
Len: Alright, so just - moving on, before we actually talk about your book - NVIDIA sounds like a pretty exciting company to work for. What's your role there generally, at a high level?
Andy: Sure. So there's a project called Apache Spark, which is an open source project. It's a distributed query engine. And NVIDIA - as you know, they make GPUs, chips that make things run faster and are really good at running things in parallel. On the projects I work on, we make Apache Spark run on NVIDIA GPUs. At a very high level, that's basically the role. And obviously, the subject matter there is very similar to my book, and to my previous work on open source projects around query engines. That's essentially what helped me get the job at NVIDIA, because I had the relevant experience.
Len: I'm sure we'll get maybe into a little bit more of the details of that when we talk about your book.
So, you've published this book, How Query Engines Work: An Introductory Guide. I thought maybe the best place to begin would be where you begin in the book - which is, what's a query engine?
Andy: A query engine is basically a tool that lets you run queries against your data - obviously, asking questions to get value from your data. So it could be running SQL statements against files that you have. You could be running these queries just within a process on your laptop, or if you have terabytes of data, you'd want to be able to run the query across a cluster of servers, so that you can process the data and complete the query more quickly.
Len: And so, to sort of put it from a non-technical perspective - and I'm a non-technical person - I mean, I've done some programming, and I've interviewed 150 people who are very technical for this podcast so far - so I know a little bit - but basically, there's some information stored somewhere on a disk, or on many disks, and that information is structured in a certain way, and someone has to make a decision about how it's structured. But then, if you want to go ask it a question, like, "How many books have I sold in the last year?" or something like that, right? You've got to have some software that knows how to ask that particular structure that you've chosen particular questions, and get the right answer back?
Andy: Absolutely. Many people will be familiar with traditional SQL databases and the way they operate. They provide a very structured way of dealing with data.
Typically, your first job is to get your data into the database system, so importing your data somehow. And then you can run queries within the database. Databases contain a query engine. They also have a storage engine and transactions, and all these things. I guess my area of expertise is more in kind of modern query engines, where you don't have to import the data first. Your data is just where your data is, and you want to run queries against that data, without having to import it first.
And so, a query engine basically translates the question you're asking - the query - into a query plan, with all the different steps that the query has to perform. And at the very bottom of this plan is accessing the data in some file somewhere. So maybe it's a CSV file, maybe it's a Parquet file. Some of these files have schemas built in, some don't. If it's a CSV file, for example - CSV files are just strings delimited by commas - you'll often have to tell the query engine, "Hey, here's the schema for this file. Column two is a decimal type. Column three is a string," and so on.
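To make the idea of a query plan concrete, here is a minimal sketch in Rust, the language Andy's DataFusion work uses. The type and field names here are hypothetical, not DataFusion's actual API; the sketch only illustrates a query being represented as a tree of steps, with a file scan and a user-supplied schema at the bottom.

```rust
// A minimal, hypothetical sketch of a logical query plan - not DataFusion's
// actual types - just to illustrate translating a query into steps.

/// The data types a column can hold in this toy engine.
#[derive(Debug, Clone)]
enum DataType {
    Utf8,
    Int64,
    Decimal,
}

/// A schema tells the engine what each column in a file contains,
/// e.g. "column two is a decimal type, column three is a string."
#[derive(Debug, Clone)]
struct Field {
    name: String,
    data_type: DataType,
}

#[derive(Debug, Clone)]
struct Schema {
    fields: Vec<Field>,
}

/// Each variant is one step in the plan; at the very bottom is a scan
/// of some file (CSV, Parquet, ...).
#[derive(Debug)]
enum LogicalPlan {
    /// Read a file, using the supplied schema if the format has none built in.
    Scan { path: String, schema: Schema },
    /// Keep only rows matching a predicate expression (a string, for brevity).
    Filter { predicate: String, input: Box<LogicalPlan> },
    /// Select a subset of columns.
    Projection { columns: Vec<String>, input: Box<LogicalPlan> },
}

fn main() {
    // "How many books did I sell last year?" might translate into a plan like:
    let plan = LogicalPlan::Projection {
        columns: vec!["title".to_string(), "amount".to_string()],
        input: Box::new(LogicalPlan::Filter {
            predicate: "year = 2020".to_string(),
            input: Box::new(LogicalPlan::Scan {
                path: "sales.csv".to_string(),
                schema: Schema {
                    fields: vec![
                        Field { name: "title".to_string(), data_type: DataType::Utf8 },
                        Field { name: "amount".to_string(), data_type: DataType::Decimal },
                        Field { name: "year".to_string(), data_type: DataType::Int64 },
                    ],
                },
            }),
        }),
    };
    println!("{:#?}", plan);
}
```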
Len: And there's a difference between columnar data and row-based data. I don't actually have the language for it. But if you can go into a little bit about that, that would be -
Andy: Sure. Let's go back to a CSV file, or something that most people are familiar with, like a spreadsheet. So it's tabular data - there are rows and columns, and you can look at it either way. And historically, many query engines have been row-based. Processing one row at a time is a very natural way to think of that data. I mean, say you have book sales by author, and you want to see how many sales there are by author - you just go through the sales one by one, and process them that way. And that's fine.
But columnar data has advantages. Basically, you read the data a column at a time, and process it a column at a time.
Let's take a really simple example. You're looking at all the sales for a particular book, and you want to know the total. If you're processing columnar data, you can read the column of all the sales amounts into memory. All these numbers are next to each other in memory, so just walking over them is very efficient, because the memory is contiguous.
And when you want to sum those numbers together, you can take advantage of a feature in CPUs called "SIMD," which stands for Single Instruction, Multiple Data. A very fancy term, but basically it means you can give the CPU one instruction and it can add up multiple numbers at the same time, rather than doing one at a time. So you get a good performance improvement from doing that, compared with row-at-a-time processing.
And you can see this taken even further if you use a GPU. With GPUs, you load a ton of data on, and they can run code against that data with a very high degree of parallelism. Modern GPUs have thousands of cores. So once your data's on the GPU, you can process that data massively in parallel. That's really why people now are moving towards columnar data - to take advantage of modern hardware.
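Here is a toy Rust sketch, not taken from the book, contrasting a row-oriented layout with a columnar one. The only point is that the columnar layout keeps all the amounts contiguous in memory, which is the layout that lets the compiler emit SIMD instructions for the sum, and the same layout a GPU kernel wants.

```rust
// A toy illustration of row-oriented versus columnar layout.

/// Row-oriented: each sale is a struct, so the amounts are interleaved
/// with the other fields in memory.
struct SaleRow {
    title: String,
    country: String,
    amount: f64,
}

/// Columnar: each column is its own contiguous vector.
struct SalesColumns {
    titles: Vec<String>,
    countries: Vec<String>,
    amounts: Vec<f64>,
}

fn total_row_based(sales: &[SaleRow]) -> f64 {
    // Walks over whole rows even though we only need one field.
    sales.iter().map(|s| s.amount).sum()
}

fn total_columnar(sales: &SalesColumns) -> f64 {
    // The amounts are contiguous f64s, so the compiler can auto-vectorize
    // this sum with SIMD instructions.
    sales.amounts.iter().sum()
}

fn main() {
    let rows = vec![
        SaleRow { title: "Book A".into(), country: "US".into(), amount: 9.99 },
        SaleRow { title: "Book B".into(), country: "UK".into(), amount: 14.99 },
    ];
    let columns = SalesColumns {
        titles: vec!["Book A".into(), "Book B".into()],
        countries: vec!["US".into(), "UK".into()],
        amounts: vec![9.99, 14.99],
    };
    println!("row-based total: {}", total_row_based(&rows));
    println!("columnar total:  {}", total_columnar(&columns));
}
```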
Len: And I think you write in the book that there are limitations to structured query languages, like SQL, when it comes to so-called "Big Data." I was wondering if you could talk a little bit about how it is that a structured query language can have limitations when it's dealing with tons and tons of information?
Andy: Sure. I guess there are a couple of different ways of looking at the limitations of SQL. I mean, SQL is, I think, the most widespread query language, one that many people would be familiar with. And it's really great for certain queries. But there are several limitations. One is in the language itself. There are times when you really need to write custom code to do some particular operation you need that isn't part of the SQL standard. And SQL does support user-defined functions.
But sometimes you really need to take control as a developer, and there just may be things you can do more efficiently that you can't do in SQL. That's one kind of category of limitation.
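As a rough illustration of the user-defined function idea - not any particular engine's API - a query engine might keep a registry of named functions supplied by the developer, something like this hypothetical Rust sketch:

```rust
// A toy sketch of a scalar user-defined function (UDF) registry: the engine
// keeps named functions supplied by the developer, and queries can call them
// like built-in functions. Names and types here are purely illustrative.

use std::collections::HashMap;

/// A scalar UDF maps one input value to one output value (f64 here, for brevity).
type ScalarUdf = Box<dyn Fn(f64) -> f64>;

struct FunctionRegistry {
    udfs: HashMap<String, ScalarUdf>,
}

impl FunctionRegistry {
    fn new() -> Self {
        Self { udfs: HashMap::new() }
    }

    /// The developer registers custom logic that the SQL standard doesn't cover.
    fn register(&mut self, name: &str, udf: ScalarUdf) {
        self.udfs.insert(name.to_string(), udf);
    }

    /// The engine looks the function up by name when evaluating an expression
    /// such as `SELECT sales_tax(amount) FROM sales`.
    fn call(&self, name: &str, arg: f64) -> Option<f64> {
        self.udfs.get(name).map(|f| f(arg))
    }
}

fn main() {
    let mut registry = FunctionRegistry::new();
    registry.register("sales_tax", Box::new(|amount| amount * 0.0825));

    // Prints something like Some(8.25).
    println!("{:?}", registry.call("sales_tax", 100.0));
}
```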
And I think the more interesting part of the question is the Big Data part. So, SQL is very flexible. You can take data from different files and join them together in different ways to build really complicated queries. And once your data is distributed across lots of servers on the network, you have to be really careful about how you run the query, because data is going to need to move around between servers to fulfil certain parts of that query. And this is where there can be a ton of overhead in distributed systems, depending on the type of operation that you're performing.
This is really where this kind of modern age of distributed query engines comes in. They need to figure out how to optimize the query in such a way as to minimize data movement - do as much processing in parallel on each computer, before the results get to the next stage in the query and so on. So -
Len: And, I'm going to put this very naively, but, what you're actually trying to do really matters for how you decide to set things up and make things work. So for example, if I'm a self-driving car and I'm asking questions about the data that I'm receiving in real time from my LiDAR or something like that - how I structure the storage of that information as it's coming in, and how I design the way I'm going to be asking it questions - like, "Is that a person on a bike or is that a hedge?" That's going to matter a lot for how I set up my system.
Andy: Yeah, absolutely. And so, one important thing with query engines is how you partition your data. Take a naive example: you want to get the total number of book sales per country. If you happen to organize your data so that each country is its own set of files, then that makes the query kind of efficient, because you can just run the same query in parallel across all these datasets that are already organized by country, and then just combine the results.
If your data isn't partitioned that way, you've got to ask all your servers to run the same query, and they're all going to produce sums for all the countries. Then you have to take those results, put them together, and then run another query on those to get the final sum for each country. So that's a good example of how organizing the data in a certain way can make queries more efficient.
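A hypothetical Rust sketch of the two-stage aggregation described here: when the data is not partitioned by country, every server produces partial sums for all the countries it holds, and a final step merges and re-aggregates them. The function and variable names are illustrative only.

```rust
// A toy sketch of per-server partial aggregation followed by a final merge.

use std::collections::HashMap;

/// Each "server" computes sums per country for the data it holds.
fn partial_sums(partition: &[(String, f64)]) -> HashMap<String, f64> {
    let mut sums = HashMap::new();
    for (country, amount) in partition {
        *sums.entry(country.clone()).or_insert(0.0) += amount;
    }
    sums
}

/// The final stage merges the partial sums from every server.
fn merge(partials: Vec<HashMap<String, f64>>) -> HashMap<String, f64> {
    let mut total = HashMap::new();
    for partial in partials {
        for (country, amount) in partial {
            *total.entry(country).or_insert(0.0) += amount;
        }
    }
    total
}

fn main() {
    // The data is NOT partitioned by country, so both servers hold a mix of
    // countries and the coordinator has to merge and re-aggregate.
    let server_a = vec![("US".to_string(), 10.0), ("UK".to_string(), 5.0)];
    let server_b = vec![("US".to_string(), 7.0), ("FR".to_string(), 3.0)];

    let partials = vec![partial_sums(&server_a), partial_sums(&server_b)];
    println!("{:?}", merge(partials));

    // If each country's data lived in its own set of files, each server's
    // partial result would already be the final answer for its countries.
}
```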
Len: And is this related to what a type system is? I was wondering if you could talk a little bit - you talk about these in your book - what is a type system?
Andy: Sure. So, not really. A type system is really about representing the different data types that you're dealing with. So we have text data, we have numeric data, and you can have structured data, which is - we have the concept of maybe an object that has different attributes. So maybe a car is a structured type, and it has attributes like engine size, horsepower, and so on. And when you're querying data, potentially you're querying all these different file formats. So if you're talking about CSV and Parquet, CSV has strings, but Parquet has its own data types - its own type system. And the SQL language has its own types as well.
When you're building query engines, this is a big concern, because you're very often converting between these different types, and that can get kind of complex. So if you're querying a CSV file and a Parquet file in the same query, you've got these type conversions going on. When you're building a query engine, you really have to pick one type system to be the one the query engine actually understands, and then you have to convert all of the inputs into that type system. And that's where Apache Arrow comes in. I'm heavily involved in the Apache Arrow project, and the query engines that we've been building use Apache Arrow as the type system. And it's a columnar memory format, basically.
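As a toy illustration of that conversion step - not Arrow's actual API - here is a hypothetical Rust sketch of an engine-internal type system and of parsing CSV string values into it according to a user-supplied schema:

```rust
// A toy sketch of a query engine's internal type system and of converting
// CSV string values into it. Not Arrow's API; names are illustrative only.

/// The one type system the engine understands internally.
#[derive(Debug)]
enum ScalarValue {
    Utf8(String),
    Int64(i64),
    Float64(f64),
}

#[derive(Debug, Clone, Copy)]
enum DataType {
    Utf8,
    Int64,
    Float64,
}

/// CSV gives us plain strings, so each value has to be parsed into the
/// engine's type according to the schema supplied for the file.
fn csv_value_to_scalar(raw: &str, data_type: DataType) -> Result<ScalarValue, String> {
    match data_type {
        DataType::Utf8 => Ok(ScalarValue::Utf8(raw.to_string())),
        DataType::Int64 => raw
            .parse::<i64>()
            .map(ScalarValue::Int64)
            .map_err(|e| format!("cannot parse '{}' as Int64: {}", raw, e)),
        DataType::Float64 => raw
            .parse::<f64>()
            .map(ScalarValue::Float64)
            .map_err(|e| format!("cannot parse '{}' as Float64: {}", raw, e)),
    }
}

fn main() {
    // Schema: column 0 is a string, column 1 is a 64-bit float.
    let schema = [DataType::Utf8, DataType::Float64];
    let row = ["How Query Engines Work", "9.99"];

    for (&raw, &data_type) in row.iter().zip(schema.iter()) {
        match csv_value_to_scalar(raw, data_type) {
            Ok(value) => println!("{:?}", value),
            Err(err) => eprintln!("{}", err),
        }
    }
}
```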
Len: And can you talk a little bit about the Ballista project? I hope I'm pronouncing that correctly?
Andy: Yeah, absolutely. So, just a bit of background to this - it's kind of a long and windy story. At the start of 2018, I decided to start building a query engine in the Rust programming language. I called it DataFusion. And very early on in that journey, I decided to make it columnar. I was looking for a type system, and somebody pointed me towards the Apache Arrow project. And after studying Apache Arrow, it seemed like a really good fit.
But there wasn't a Rust implementation available. I mean, Apache Arrow has implementations in many programming languages, but Rust wasn't one of them at the time. So I ended up building my own kind of version in Rust. And later on, I donated that to the Apache Software Foundation, so that became the starting point of the Rust implementation of Arrow. It's come a long way since then - there's a community behind it. I just kind of got the ball rolling, though. So with that in place, I was able to make more progress with DataFusion. Basically, DataFusion is a query engine that supports SQL; it's implemented in Rust and uses the Apache Arrow memory model. Once I got it to a certain point, I was able to donate that to the Arrow project as well - it's actually part of Arrow now. So there are two building blocks in place now: the Arrow type system in Rust, and the DataFusion query engine in Rust.
And that allowed me to move onto Ballista, which was always the kind of long term goal, of building a distributed query engine. So, with Ballista, I'm trying to take the work that's been done with DataFusion, which runs on a single computer, and make it run distributed on many computers, basically.
Len: And what's the advantage of running a query engine on distributed computers?
Andy: So basically, if you have more data - people talk about this concept of Big Data, which is a very vague term. I didn't come up with this definition, but the definition I've seen that I like the most is that Big Data means there's more data than you can fit into memory on your computer. So if you have a laptop with maybe 32 GB of RAM, Big Data is anything over that. And if you have terabytes or petabytes of data, you can't even fit it on the disk on your laptop. So this is where you need a cluster of computers to be able to process that data.
Len: Great, great. And what was the inspiration for writing the book? Were people asking you, "Can you write an introductory guide, or step us through the process of building one of these?"
Andy: There were a couple of factors. I had to go through this journey - I had to teach myself how to build a query engine by building a query engine, and just figuring out stuff as I went along. It was kind of a painful journey. I looked at some open source query engines, and I found the documentation was pretty sparse. It seemed to be that the people building the project would have deep understanding, and you'd have to work with these people for a period of time to learn what the design philosophies were.
And I just thought it'd be a really good thing to do. I had the knowledge, and I really wanted to get it down for other people to learn from. I was also looking to build a strong community around Apache Arrow, around DataFusion. I figured, what better way?
I designed DataFusion with a certain design philosophy. So I figured, why not write a book explaining, from start to finish, my approach to building a query engine? That makes it much easier for people to get up to speed with the overall design, and easier for them to contribute to the project.
Len: And just moving onto the final part of the interview, where we talk about the actual practical experience of writing and publishing a book. Can you talk a little bit about why you decided to choose Leanpub as your publishing platform, out of all the other platforms out there?
Andy: Sure. So it was actually, I guess, kind of a random event. I was talking to a guy called Matthew Powers, who has written a Leanpub book. I forget the actual title, but it was something like Writing Beautiful Apache Spark Code.
I was chatting with him, and he told me about Leanpub, he was very excited to share it with me. I was really impressed, and I decided to give it a go.
And yeah, I hadn't considered writing a book before. I would've been very worried about learning to use different tools, or about the process of writing a book. But I could see it working with Leanpub. I was already very used to writing Markdown files for documentation. I could see how simple the process was of turning these Markdown files into a book and pushing it out to a platform where people could buy it - it kind of seemed too good to be true, in a way.
Len: I apologize for not knowing the answer to this next question already, but did you publish the book in progress, or did you publish it all in one big chunk?
Andy: I did publish it in progress.
Len: Okay.
Andy: That was one of the really attractive things for me about the platform. I mean, as a software developer, everything I work on is always in progress. There's no such thing as a finished software product, and I really liked that approach to the book - that I could publish sections early and get feedback from people, which would help me improve the book. So that was really great.
And I would say - although the book is complete in some sense of the word - all the chapters are there, there's content, and there's a start and a finish and a good flow - I definitely intend to improve the book and add more content over time. Because I'm still on this journey of learning about query engines, and there are sections that I'm learning about right now, and I'm looking forward to expanding those sections in the book.
Len: And how did you get feedback from people as you were publishing, or how are you currently getting feedback from people? One thing for people listening, we - at Leanpub, we very much encourage authors to interact with readers to help them improve their books or decide what to write next, or decide what to stop writing or what not to write. Things like that.
Andy: Yeah, absolutely. So fairly early on, I was actually asking people on Twitter what they wanted to hear about in the book - were there particular things they wanted me to write about? I also encouraged people to give me feedback, as an author, in the Leanpub forums. And it's been really great to have that - some really good feedback.
Len: Oh, so you've been using the forums?
Andy: Yes.
Len: Yeah, just to explain this, so - to anyone listening - after you publish a Leanpub book, you have an option to create a forum online. We use the platform Discourse. Then, that lets people ask questions about the book, or provide feedback. And so, people have been using that. That's great.
Andy: Yeah, that is.
Len: Okay, that's fantastic. Though the forum I spend the most time on is our authors' forum, answering authors' questions, which we get from people as well - alright.
Well, I guess the last question - I always like to ask the guest on the podcast, if they're a Leanpub author, is - if there was one thing we could build for you or one thing you really hate about Leanpub that we could fix for you, what would you ask us to do?
Andy: Wow, that's a tough question. I haven't really run into any issues - it's a simple platform to use. I guess maybe one thing, and this may be an area where I just don't know what the capabilities of the platform are: I wonder how you'd use it for people to - like, if people have Kindles and they want to buy the book and read it on the Kindle, I don't know if that's something they can do easily today with the platform.
Len: That's a great question. Thank you for asking that. Specifically, when you use Leanpub to write a book - whenever you click the button to preview or publish, we create it as a PDF, an EPUB, and a MOBI. And we absolutely encourage people to distribute their books as widely as possible. I mean, in our ideal world, all the eyeballs would be on Leanpub, but they're not.
And so, if you want to sell your book to people, you can just take that MOBI file and you can sell it on Amazon if you want. You can take that EPUB file and you can sell it on Apple, on their Books platform. We very much encourage that.
In fact, it's not a requirement, if you write a book on Leanpub, that you actually publish it on Leanpub at all. Some people don't do that. They write their book and then they create previews that no one else can see - just you. And then you take that file and go to the service that you actually want to use. And that's perfectly fine.
We also actually have a pretty popular feature, which is our Print-Ready PDF output, which basically means that when your book is done - and of course, it's never really done-done, right? But when it's close enough that you want to commit to print, you can actually just click a button - I mean, you adjust some settings and stuff. But in the end, you just click a button, and we give you the PDF file that you need to upload to services like Amazon or Lulu or something like that, or Ingram. So that people can get print-on-demand copies of your book as well.
So, we absolutely encourage people to use different platforms. And we try to provide them with everything that they need in order to do that.
Andy: That's very cool. I didn't actually know all of that, so I've learned something new there, that's great.
Len: That's great - well, that's our fault for not communicating better about it.
Well, Andy - thank you very much for taking the time to have this great conversation today. And thank you very much for being a Leanpub author.
Andy: Thank you for having me, it's been a pleasure.
Len: Thanks.
And as always, thanks to you for listening to this episode of the Frontmatter podcast. If you like what you heard, please rate and review it wherever you found it, and if you'd like to be a Leanpub author, please visit our website at leanpub.com.
