Learn the Alexa dev environment by performing easily achievable tasks by voice command. As an example, let's ask Alexa to tell us the number of 30 Days of Code articles written by a specific author.



Since the Amazon overlords control the entire dev stack for this day's work, it integrates really nicely. The data flow is something like the following:

  • An Alexa skill is defined using the Amazon developer portal.
  • A user invokes the Alexa skill by voice command.
  • The Alexa skill makes a call to an AWS Lambda function that handles the logic.
  • The Lambda function returns text and metainfo for the Alexa device to speak.
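Concretely, the Lambda receives a JSON event from the Alexa service describing the request, and the slot values captured from the voice query ride along under `request.intent.slots`. Here is a hypothetical, hand-constructed event showing the shape (field values are illustrative, not captured from a real device; real events carry many more fields like locale and timestamps):

```javascript
// Hypothetical Alexa IntentRequest event, hand-constructed for illustration.
const exampleEvent = {
    session: { sessionId: 'amzn1.echo-api.session.example' },
    request: {
        type: 'IntentRequest',
        requestId: 'amzn1.echo-api.request.example',
        intent: {
            name: 'articles',
            slots: {
                Author: { name: 'Author', value: 'gabe' }
            }
        }
    }
};

// The Lambda digs the spoken slot value out of the intent:
const author = exampleEvent.request.intent.slots.Author.value;
console.log(author); // prints "gabe"
```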

Writing a Skill:

In the Amazon Developer Portal (not the AWS Portal) create a new Alexa Skill, based on the Custom model.


Invocation Name:

The skill invocation name is the name by which the user invokes the skill. In our case we want the user to "ask thirty days of code" to do something.
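In the skill's interaction model JSON, this shows up as the `invocationName` field. A minimal sketch, assuming the default JSON editor format in the developer portal:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "thirty days of code"
    }
  }
}
```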



Intents are skill keywords tied to a specific action. For example, we want to ask about the number of articles written by an author, so we create an intent called articles.


For this intent we'll need to provide sample Utterances, which are phrases used to define context for language processing.
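A few sample utterances for the articles intent might look like the following, with the slot referenced in curly braces (these phrasings are illustrative, not necessarily the exact ones used in the original skill):

```
how many articles does {Author} have
how many articles has {Author} written
number of articles by {Author}
```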



Slots, in essence, are enums for Alexa skills. In other words, developers can define a list of grouped pre-defined values for reference within the skill and for passing along to the backing API.


We'll need to add a Slot for Author so we can pass that value to the Lambda for later processing. It will need some predefined values, so we'll provide the names of authors to look up.
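In the interaction model JSON, a custom slot type with its values looks something like the following (the type name and the author slugs here are placeholder assumptions):

```json
{
  "types": [
    {
      "name": "AUTHOR_NAMES",
      "values": [
        { "name": { "value": "gabe" } },
        { "name": { "value": "dan" } }
      ]
    }
  ]
}
```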



We'll need to add an AWS Lambda endpoint to process some data for us by clicking through the endpoint tool.



Using the "Build Model" button on the Invocation tab we can prepare the Skill and move on to writing some Lambda code.

Lambdas - Adding brains to the skill:

The Lambda function invoked by our Alexa skill is responsible for taking the parameters from the voice query, processing them how we want, and returning text for the Alexa device to speak to the user.

Creating a Lambda

Amazon provides most of the template code we'll need to get up and running. We can harness it by creating a new Lambda function based on the alexa-skills-kit-color-expert blueprint.


After giving the function a name, a permissions role, and a trigger, we can edit the Node.js code that provides our logic.

Code Additions:

First, we'll need code to call the Ghost blog API and return the result as JSON for use.

var https = require('https');

function httpsGet(query, callback) {
    var options = {
        host: '',   // your Ghost blog's hostname goes here
        path: '/ghost/api/v0.1/' + query,
        method: 'GET',
        port: 443
    };

    var req = https.request(options, res => {
        var dataline = '';
        // accept incoming data asynchronously
        res.on('data', chunk => {
            dataline += chunk;
        });

        // return the data when streaming is complete
        res.on('end', () => {
            try {
                var responseJSON = JSON.parse(dataline);
                callback(responseJSON);
            } catch (err) {
                console.log("ERROR ==== Invalid JSON received");
            }
        });
    });

    // send the request
    req.end();
}

We can update getWelcomeResponse and handleSessionEndRequest to match the name scheme of our app, but that's just aesthetic.

function getWelcomeResponse(callback) {
    // If we wanted to initialize the session to have some attributes we could add those here.
    const sessionAttributes = {};
    const cardTitle = 'Welcome';
    const speechOutput = 'Welcome to thirty days of code. ' +
        'Try asking about the progress of an author by asking thirty days of code how many articles that author has.';
    // If the user either does not reply to the welcome message or says something that is not
    // understood, they will be prompted again with this text.
    const repromptText = '';
    const shouldEndSession = true;

    callback(sessionAttributes,
        buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
}

function handleSessionEndRequest(callback) {
    const cardTitle = 'Session Ended';
    const speechOutput = 'Check up on thirty days of code again';
    // Setting this to true ends the session and exits the skill.
    const shouldEndSession = true;

    callback({}, buildSpeechletResponse(cardTitle, speechOutput, null, shouldEndSession));
}

In the onIntent function we'll add the articles intent we defined in our skill.

function onIntent(intentRequest, session, callback) {
    console.log(`onIntent requestId=${intentRequest.requestId}, sessionId=${session.sessionId}`);

    const intent = intentRequest.intent;
    const intentName = intentRequest.intent.name;

    // Dispatch to your skill's intent handlers
    if (intentName === 'articles') {
        getArticlesByAuth(intent, session, callback);
    } else if (intentName === 'AMAZON.HelpIntent') {
        getWelcomeResponse(callback);
    } else if (intentName === 'AMAZON.StopIntent' || intentName === 'AMAZON.CancelIntent') {
        handleSessionEndRequest(callback);
    } else {
        throw new Error('Invalid intent');
    }
}
Finally, we write the getArticlesByAuth function that the articles intent dispatches to. This is where the meat and potatoes of our logic lives. We can leverage the Ghost Blog API v0.1 with some query parameters to select articles by author slug (nickname).

function getArticlesByAuth(intent, session, callback) {
    const repromptText = null;
    const sessionAttributes = {};
    let shouldEndSession = true;
    let speechOutput = '';

    // Use the Author slot we defined
    let author = intent.slots.Author.value;

    let ghost_cli_secret = "your client secret";
    let query = `posts/?client_id=ghost-frontend&client_secret=${ghost_cli_secret}&include=authors&filter=authors:[${author}]`;

    httpsGet(query, (theResult) => {
        // Number of posts by the author
        let articleCount = theResult.posts.length;

        // What do we want to say when we get the data
        if (articleCount) {
            speechOutput = `${author} has ${articleCount} currently published articles.`;
            shouldEndSession = true;
        } else {
            speechOutput = `I can't find any articles by ${author}.`;
        }

        // Setting repromptText to null signifies that we do not want to reprompt the user.
        // If the user does not respond or says something that is not understood, the session
        // will end.
        callback(sessionAttributes,
            buildSpeechletResponse(intent.name, speechOutput, repromptText, shouldEndSession));
    });
}



After saving the Lambda we should be ready to test on the Developer Portal.


Alternatively, if the Amazon developer account used for development was also used to set up an Alexa device, then the new skill can be tested on that device immediately.

Example Code:

Complete example implementation of index.js can be found here.

Further Work:

Obviously, today's example is brief and a trivially human-achievable task, due to the limited development time imposed by the 30DoC framework. But it's easy to see (from the large market of Alexa skills) applicable uses for the platform.