Fantastic Elastic 3

Level: Advanced

Automate putting Eurovision data on Kibana dashboards. From no data to wow Kibana!

About the Book

This is a book by a QA (Quality Assurance) Engineer who loves Kibana.

It promises a fun, interactive reading experience for data fans, who can follow along just by reading, or by trying out some or all of the steps for themselves.

If you are thinking of using Kibana, or of taking the Elastic Engineer Certification at some point, this could be a fun starting point.

The book breaks down the journey as follows:

  1. Part One: Spin up Elasticsearch, Kibana and a Node app running together - STATUS: Complete
  2. Part Two: Build out the Node app so that it can scrape data from an outside source and ingest it into Elasticsearch - STATUS: In progress
  3. Part Three: Automate, within reason, the creation of charts, graphs and dashboards in Kibana using this data - STATUS: To do
  4. Part Four: Think "Containerised" when working on projects - STATUS: To do
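As a taste of what Part One builds towards, the whole stack can be described in a single Docker Compose file. This is only a minimal sketch, not the book's exact setup: the image versions, the `node-app` directory name, and the port choices here are illustrative assumptions.

```yaml
# docker-compose.yml - minimal sketch of the Part One stack
# (versions, ports and the node-app build context are assumptions)
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      # single-node discovery: fine for local experiments, not production
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      # Kibana reaches Elasticsearch by service name on the Compose network
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  node-app:
    build: ./node-app   # hypothetical directory holding the custom Node app's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - elasticsearch
```

With a file like this in place, `docker-compose up` starts all three services together, and `docker-compose down` tears them back down.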

If you feel like a complete beginner and want to keep things very simple, feel free to start with

Fantastic Elastic - Level: Beginner

or

Fantastic Elastic 2 - Level: Intermediate

 

If you have read those books, this book starts the journey again, and digs deeper into automating the steps as simply as possible.

About the Author

Anita Lipsky

Anita is a Quality Assurance (QA) and Software Test Automation Engineer who works on development teams using Agile and software best practices in Oslo, Norway. She is a speaker, blogger and founder of www.purplebugs.com, and an active participant in both the Testing and Girls Can Do IT communities in Oslo.

Anita has a degree in Information Technology and currently holds three ISTQB qualifications: ISTQB Certified Tester Foundation Level, ISTQB Agile Tester Foundation Extension and ISTQB Advanced Level Test Automation Engineer.

Since starting her Fantastic Elastic book series, Anita has attended the Elasticsearch Engineer I and II classes, and is now an Elastic Certified Engineer.

You can connect with Anita via email or on LinkedIn.

Table of Contents

  • Book Blurb
    • Book cover text
    • Teaser text
    • About the book
    • About the author
    • Not an official Elastic product
  • Dedication
  • Introduction
    • Purpose of this book
    • Tech stack
    • Who this book is for
    • How to use this book
    • This book is being written as I go
    • How to give feedback
    • Summary
    • Australian spelling
    • Coming up…
  • Overview of steps
  • Part One overview of steps
    • Steps to Spin up Elasticsearch, Kibana and a Node app running together
  • Use Docker to spin up Elasticsearch and Kibana
    • Why Docker
    • Optional: The Docker steps are available on GitHub
    • Steps to take for a quick way to run Elasticsearch and Kibana
    • Start Elasticsearch and Kibana with Docker in a quick and unsafe way
    • Download and run Elasticsearch in a container
    • Download and run Kibana in a container
    • Verify Elasticsearch and Kibana are up and running
    • Stop the containers and remove all data
    • Celebrate!
    • Coming up…
  • Use Docker Compose to spin up Elasticsearch and Kibana
    • Steps to take
    • Install Docker Compose
    • Define the steps in the Docker Compose script
    • Optional: Copy-paste the contents of Docker Compose from GitHub
    • Run the script in one command
    • Optional: Stop the script in one command
    • Celebrate!
    • Coming up…
  • Build a simple standalone Node app and run it
    • Optional: The standalone app can be found on GitHub
    • Steps to take
    • Create a directory for the app
    • Within the directory, initialise Node
    • Install the node-fetch module
    • Write the app in JavaScript
    • Run the app
    • Verify the app works as expected
    • Stop the app
    • Celebrate!
    • Coming up…
  • Update the Node app to output responses in a webserver
    • Steps to take
    • Use the npm module express for the webserver
    • Update the Node app to output the response to the webserver
    • Rename the file to be server.js
    • Verify the app runs as expected
    • Celebrate!
    • Coming up…
  • Create a Dockerfile for the Node app
    • Steps
    • Stop the standalone node app
    • Create a Dockerfile for the Node app
    • Build the Dockerfile to create an image
    • Verify the Docker image is created
    • Run the Node app based on the image
    • Verify the Node app works as expected
    • Stop the Node app
    • Celebrate!
    • Coming up…
  • Update Docker Compose to include the Node app
    • Steps
    • Stop the currently running Docker Compose
    • Update Docker Compose to include the Node app
    • Optional: Copy-paste Docker Compose from GitHub
    • Test out running Docker Compose
    • Update the Node app to be able to communicate with Elasticsearch’s containerised network
    • Celebrate!
    • Coming up…
  • Verify running Docker Compose shows the Node app interacts with Elasticsearch
    • Steps
    • Version the Node app to 1.1, rebuild and re-run Docker Compose
    • Verify the Node app works as expected
    • Celebrate!
    • Coming up…
  • Part One Celebration and recap of what was done
    • Happy path recap of steps
    • Shorter recap of steps
  • Part Two overview of steps
    • Build out the Node app so that it can scrape data from an outside source and ingest it into Elasticsearch
  • Node app ingest from JSON file
  • Data scraper Part 1 - Build out the Node app so that it can scrape data from the web
    • What is HTML?
    • What is the DOM?
    • Use the GitHub file if you want to skip ahead
    • Steps to take
    • Create a directory for the scraper module
    • Create a skeleton Hello World app using the jsdom module
    • Run npm init to initialise the module
    • Install the jsdom npm module for data scraping
    • Test the scraper module
    • Setup scraper to access the Wikipedia page contents using the node-fetch module
    • Optional: Play around with using JavaScript and the DOM
    • Output one row from this table
    • Optional: Play around with loops and storing data in a sensible object format
    • Celebrate!
    • Coming up…
  • Data scraper Part 2 - Fleshing out the data scraper
    • Why not take data from some other source?
    • Steps to take
    • Loop through all rows, getting at least one column and storing to a sensible data format
    • Handle non-conforming data in order to continue scraping and cleaning data from the next columns
    • Continue scraping to get the first six columns, skipping non-conforming data
    • Celebrate!
    • Coming up…
  • Data scraper sends data to Elasticsearch
    • Steps to take
    • Create new folder to contain an app that uses scraper.js
    • Update scraper to have a function that can be called in main application
    • Update the start() function to use a Promise
  • Test the standalone scraper
    • Update the server app to use the scraper
    • Test the server app
    • Send POST request containing the JSON from scraper to the Elasticsearch API
    • Celebrate!
    • Coming up…
  • Data scraper Part 3 - Update the data scraper to get all the data required
  • Part Three steps
    • Automate within reason the creation of charts, graphs and dashboards in Kibana using this data
  • Think “Containerised” when working on projects
  • Appendix - Handy Commands
    • Remove a Docker image
  • Appendix - Key Concepts
    • Docker Key Concepts
  • Appendix - Running different versions of the officially supported Node, Elasticsearch and Kibana applications
    • Steps to take
    • Tear down the current environment
    • Update the version of Node that the custom Node app depends on
    • Rebuild the custom Node image
    • Update the versions of Elasticsearch and Kibana
    • Spin up the environment using these versions
