The Chatbot Revolution and The London BotFramework Meetup Group

If you’re one of the few people who have managed to avoid the onslaught of chatbot-related articles over the past year, then let me start by way of an introduction: a chatbot is, in its most basic form, a computer program that can mimic basic human conversation.

This isn’t particularly new or exciting; this sort of chat bot has been around since the 70s. What is new and exciting is the recent development in systems and frameworks which make creating your own chat bot easy enough that you can focus on the quality of the interaction with the end user instead of wallowing in the technical considerations.

The options range from a website with a form to fill in that gives you a chat bot at the end of it, all the way through to an enterprise company’s framework for building your bespoke conversational interface from scratch.

The term “Conversational Interface” has become significantly more popular recently as well; all those web designers, UI developers, and UX developers who are taking this opportunity to reinvent classic, almost stale, e-commerce concepts within the limited flow of an instant message are paving the way for the next iteration of how we interact with a company or brand.

We’re already more than happy to throw a tweet at a company who may have wronged us, instead of calling their helpline or emailing their customer support; and we expect, and usually receive, a personalised and rapid response. Our expectations are high, and the next generation of chatbots and conversational interfaces can help to deliver on this.

Whilst there are a brave few who have ventured forth with a chat bot – Uber, Dominos, SkyScanner, Shopify (although they’re quick to tell you it’s not a chatbot, but a conversational interface to a product catalogue, and a fine one it is at that) – the rest of us have a downpour of resources, information, documentation, APIs, EULAs, T&Cs, and no idea where to start.

This is one of the reasons I decided to start the London (LDN) BotFramework Meetup group. BotFramework is Microsoft’s emerging platform for developers to create their bespoke chatbots, and it’s this technology – along with a general conversation on chatbot futurism and various UX/R&D considerations – that will be focused on at the meetups.

We already have a good relationship with Microsoft’s BotFramework product team, and even have a Senior Technical Evangelist from Microsoft UK as the opening speaker for the inaugural session on Wednesday 26th October.

I’ve reached out to companies who have blazed that conversational interface trail and have a few already lined up to take us through in-depth case studies of both their tech and design considerations, challenges, and results.

It is my hope that the meetup group will help to mould the burgeoning future of the chatbot, via the conversations during and after the meetups, into something both tangible and wonderful.

Anyone interested in meeting other bot enthusiasts, developers, designers, or conversational interface nerds, can join the group over at the LDNBotFramework meetup.com page.

Submissions for talks are more than welcome via the LDNBotFramework meetup.com page, @LDNBotFramework on twitter, or by contacting me (Robin Osborne, the group organiser) directly: @rposbo



LUIS Natural Language Service for BotFramework

Creating a hosted bot using Microsoft’s botframework couldn’t be easier; hopefully you’ve had a chance to create one already, and if not there’s a great introduction to creating your first bot right here.

In the previous article we saw how to create a QnA (aka FAQ/Knowledge Base) service using the little-known QnA Maker service of the botframework.

In this post we’ll start to create a more intelligent bot; one which can appear to understand the intent of the incoming message and extract specific key variables from it.

Understanding the intent of a piece of text is a really tricky problem to solve; totally out of scope for this article, and for most bot projects! Luckily, the botframework has a friend called LUIS – the Language Understanding Intelligent Service.

LUIS exists as a standalone product, and can be called with a query (text input) to give a result (matching “intent” with a “confidence” score, as well as any extracted “entities” – we’ll get on to these later).

The botframework can support LUIS such that the intent response from LUIS can map directly to a class with a matching attribute value, such as
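the LuisModel attribute in this rough sketch (the app ID and subscription key shown are just placeholders – use the values from your own LUIS app):

[Serializable]
[LuisModel("YOUR-LUIS-APP-ID", "YOUR-SUBSCRIPTION-KEY")]
public class NewsBotDialog : LuisDialog<object>
{
    // intent handler methods go here
}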


It can even map down to a specific method within that class using another attribute:
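Something like this rough sketch of a handler for the “local.news” intent we’ll define later in this article (the method body is purely illustrative):

[LuisIntent("local.news")]
public async Task LocalNews(IDialogContext context, LuisResult result)
{
    // runs when LUIS maps the incoming message to the "local.news" intent
    await context.PostAsync("Looks like you want some local news...");
    context.Wait(MessageReceived);
}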


Given this magic, it’s no longer obvious what the entry points are for your application, which can make developing and debugging the logic within your bot really tricky.

We’ll get on to the C# and botframework magic later, but first let’s look at getting started with LUIS and how to use it.

Setting up LUIS

Head over to Luis.ai and log in with your Microsoft account. After a brief intro wizard thingy, you’ll see a screen like this

luis initial page

Here is where we can define our apps. Create a new app by clicking either “New App” -> “New Application” or “Build a new language understanding application” – both open the same overlay:

luis create app popup

  • Give it a name and choose from IoT, Bot, Mobile Application, or Other (I honestly have no idea if this info changes anything; AFAIK it’s just data gathering); I just chose “Bot”.
  • Choose your “Application domains” – again, this doesn’t alter your experience, so just go with whatever you prefer; I chose “Entertainment”.
  • “Application description”? “rposbo’s demo luis bot thingy” is all I put 🙂
  • “Culture” might actually be used, but I’ll get on to that in another article.

Now that you’ve got an empty app, you need to set up some intents. You should see a screen that looks a bit like this:

created app

Click the plus sign next to INTENT to create a new intent; in terms of the botframework we can just think of an intent as a category which we’re going to teach LUIS to map sentences to, and extract meaningful terms from.

create intent

Give it a name – for example “local.news” – and enter an example “utterance”:

“Utterances” are the short sentences which are used to map the user’s input to an intent. You can define many different ways of saying (uttering) something which you intend to mean the same thing.

For my “local.news” example, I’m going to use the utterance “what’s happening in London today?”

Now I’ve got an intent name – “local.news” – and an example utterance – “what’s happening in London today?” – I can hit “Save” to create this intent.

I’m going to create one more intent – “local.weather” – and seed it with the utterance “what’s the weather like in London today?”

(Notice these utterances have some variables in them – “London” and “today” – which we’ll come back to later on)

The LUIS web service

We’re nearly ready to give this very basic logic an endpoint; to do this we have to TRAIN and PUBLISH the LUIS app. Every time you want to update your LUIS app – if you’ve added more utterances or intents for example – you always need to TRAIN and PUBLISH again.

Firstly, hit TRAIN in the bottom left and wait for the spinny loader thing to finish; the more utterances you have, the longer this process takes.

train fail

Uh oh. You do need to define a minimum of two utterances per intent. Add a couple more and hit TRAIN again, then once that’s done hit PUBLISH to get this overlay:

app publish pop-up

Now hit “publish web service” to create or update your LUIS app’s web service.

app published

Testing LUIS

Now that we’ve got a web service, let’s try it out! In that same overlay you should see a URL for testing and a query input box. If I type in “will it be sunny in Ipswich tomorrow?” and hit TEST, then I should see a new tab open with a json response that looks like this:

json response from LUIS endpoint

Notice the confidence scores are really low – they’re on a scale of 0 to 1. Basically, LUIS was not able to match that input to either intent with any degree of confidence, so the “None” intent got the highest score (0.28 – just above “local.weather” with 0.26, which is promising).

We can easily fix this by submitting the exact phrase into the LUIS app homepage and selecting our preferred intent from the dropdown:


After doing so that same sentence will now match with a confidence of “1” – i.e., super premo number one confidence!


Remember what I said earlier? Before any training changes (i.e., new or recategorised utterances) can take effect, you need to TRAIN and PUBLISH, otherwise the query result won’t change. So hit TRAIN then PUBLISH and “Update Published Application”. Let’s refresh that web service call again:

retrained republished

(Not quite “1”, but it rounds up!)


We’ve now got a LUIS app with two intents, each with two mapped utterances, and a web service endpoint to send text to and see if LUIS can map that text to an intent.

Training LUIS

Hopping back to the LUIS app’s home screen, you’re presented with a text box where you can enter more utterances to see how well LUIS maps them to intents. You can then override or set the correct intent, hitting “Submit” each time, to help teach LUIS.

Once you’ve done this for a few more utterances, hit TRAIN and PUBLISH to update the web service endpoint.

This is obviously a pretty manual process. If you already have a good set of input data to hand (for example, extracted from chat logs elsewhere) then this method of training your LUIS app is very inefficient.

Luckily there’s the option to upload a text file of utterances directly! Hurrah! This must be in the form of a text file containing one utterance per line, each utterance being unconnected to any intent.

On your LUIS application list page (not the LUIS app homepage itself), hit “import utterances” for the app you want to update (in case you have more than one)

upload text file of utterances

You’ll then be presented with this:

import 'unlabled' utterances popup

Once the file has been uploaded, LUIS will attempt to map each one to the existing intents.

utterances imported

It takes a bit of clicking around to find these uploaded utterances; you need to head to the SUGGEST tab, where it will gradually show you the uploaded entries along with their mapped intents and confidence scores.

It’s here that you can recategorise or confirm the terms one by one; each time you submit one, the model is retrained and the subsequent utterances get higher confidence scores.

Remember to TRAIN and PUBLISH when you’re done, then try querying the endpoint again with some alternative input; if the intent match is wrong or has a really low score, you can enter the same input on the LUIS app’s home screen – NEW UTTERANCES – to retrain to the correct intent.


Entities

I mentioned variables in the input earlier, so let’s have a look at mapping and extracting those next.

Head back to your LUIS app’s homepage and hit the plus sign next to ENTITIES. That will open the entity overlay where you can create a new entity.

add new entity popup

Give your entity a name and hit “Ok”; since my utterances are like “what’s the weather like in London today?” I’m going to create a “where” and a “when” entity.

LUIS already has many built-in entity types which cover location and dates; I’m just creating these for demonstration purposes

Now we can go back to our utterances and improve them by highlighting the entities; go to “Review Labels” and tap “Show all labelled utterances” to start enhancing them.

Choose an utterance and click to highlight the entity, then select the entity type from the pop-up and hit “submit” to confirm

select entity in utterance

Do this a few times and hit TRAIN and PUBLISH when you’re done.

Now try searching for a new utterance, one that you haven’t explicitly defined the entities for; you should see LUIS has identified the entities for you! If not, highlight and set them yourself.

Submit this new utterance to help improve your LUIS model (…and TRAIN and PUBLISH again..)


You should now be able to query LUIS via the service url and receive a json response with the most relevant intent and also with the entities identified:

json response with entities

You could now call this endpoint directly – using a webclient for example – and parse the response to determine the code to execute and the variables (entities) that were passed in.
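For example, here’s a rough sketch of calling the published endpoint from C# and pulling out the top intent and any entities – the endpoint URL is whatever the PUBLISH overlay gave you, and the property names below assume the json response format shown above:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static async Task QueryLuisAsync(string query)
{
    // the base URL (app id, subscription key, trailing "q=") comes from the PUBLISH overlay
    var endpointUrl = "YOUR-LUIS-ENDPOINT-URL-ENDING-IN-q=";

    using (var client = new HttpClient())
    {
        var json = await client.GetStringAsync(endpointUrl + Uri.EscapeDataString(query));
        var result = JObject.Parse(json);

        // top-scoring intent and any extracted entities, as per the json response above
        Console.WriteLine($"Intent: {result["intents"][0]["intent"]} ({result["intents"][0]["score"]})");

        foreach (var entity in result["entities"])
        {
            Console.WriteLine($"Entity: {entity["type"]} = {entity["entity"]}");
        }
    }
}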

You can also use the built-in support for LUIS in the Botframework.

We’re going to look at both of these options in the next article!


Botframework Data URI images

A quick botframework tip – you can include images in your message attachments by Data URI, not just by URL!

For example, constructing a message like this:

var reply = message.CreateReply("Here's a **datauri image attachment**");
reply.Attachments = new List<Attachment> {
    new Attachment()
    {
        ContentType = "image/jpg",
        // the ContentUrl is the Data URI itself, e.g. "data:image/jpg;base64,<your base64-encoded image data>"
        ContentUrl = dataUri,
        Name = "datauri"
    }
};

Gives a response that looks like this:

botframework image data uri

The image is very blurry because the emulator forces all images to fit a predefined width; the image I’ve used is only 16×16, so has been massively stretched!

You can use a much larger image – the previous one is around 500 bytes and this next one is nearer 20kb:

botframework image data uri big

Pretty cool, huh? Make sure the Data URI you’re using is valid and correct, otherwise either the botframework will return an internal server error, or the emulator will break.

I’ve got this working with images over 140kb, so it’s certainly not limited by ye olde IE8’s 32kb limitation.

It is also supported across Skype desktop, Skype web, and Skype for Android:

datauri on skypes

Getting a Data URI

The easiest way to make sure you’re getting a valid Data URI is via the amazing Chrome devtools: visit a page with the image you want – in my example I’m using my twitter page – and open devtools (F12/ctrl+shift+I/cmd+shift+I/via the menu):

chrome dev tools - copy image as data uri

Open the Sources tab, then find the picture you want within the tree structure; once you’ve found it, right click and select “Copy image as Data URI”! Amazing. Easy. You can then paste this directly into the browser address bar to see the image rendered, then paste this in to your bot code, as above.
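If you’d rather build the Data URI in code – for example from an image file on disk – a quick sketch like this does the job (the file path is just an example):

using System;
using System.IO;

// build a Data URI from a local jpg, ready to drop into the attachment's ContentUrl
var imageBytes = File.ReadAllBytes(@"C:\images\bot-logo.jpg");
var dataUri = "data:image/jpg;base64," + Convert.ToBase64String(imageBytes);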

Have fun!


Enable AMP Analytics using a custom WordPress plugin

I’ve recently enabled support for Facebook Instant Articles, Google AMP, and Apple News on this blog following this enlightening article.

It wasn’t exactly plain sailing; AMP needs a logo, and a featured image set for every article (both of which must be above a minimum size); the Instant Articles plugin has a feed url which my feedburner plugin breaks; Apple News needs some real tweaking, but still strips out code blocks from posts.

However, I also wanted to get my AMP plugin hooked up with my Google Analytics tracking. Luckily, the plugin from Automattic has support for this, but I needed to implement the analytics configuration using a custom theme or a custom plugin.

I’m not a PHP developer, let alone a WordPress developer, so this doesn’t come naturally to me! Please bear with me..

Following the instructions on the Automattic AMP plugin github page, I decided to create a custom plugin in my /wp-content/plugins directory; I created a new dir – amp-analytics – and put the following in an amp-analytics.php file:

<?php
/**
 * Plugin Name: amp-analytics
 * Plugin URI:  https://robinosborne.co.uk/
 * Description: adding analytics into amp
 * Version:     20160927
 * Author:      Robin Osborne
 * Author URI:  https://robinosborne.co.uk/
 * License:     GPL2
 * License URI: https://www.gnu.org/licenses/gpl-2.0.html
 * Text Domain: wporg
 * Domain Path: /languages
 */


add_filter( 'amp_post_template_analytics', 'xyz_amp_add_custom_analytics' );
function xyz_amp_add_custom_analytics( $analytics ) {
    if ( ! is_array( $analytics ) ) {
        $analytics = array();
    }

    // https://developers.google.com/analytics/devguides/collection/amp-analytics/
    $analytics['xyz-googleanalytics'] = array(
        'type' => 'googleanalytics',
        'attributes' => array(
            // 'data-credentials' => 'include',
        ),
        'config_data' => array(
            'vars' => array(
                'account' => "UA-000000-0"
            ),
            'triggers' => array(
                'trackPageview' => array(
                    'on' => 'visible',
                    'request' => 'pageview',
                ),
            ),
        ),
    );

    return $analytics;
}


This resulted in a new plugin appearing in my WordPress dashboard:

my custom amp-analytics plugin appearing

Once activated, I visited an “amp” version of my most recent article and viewed the source to find the GA content nicely rendered for me:

amp analytics in the page source

Hurrah! WordPress rocks!


Create your first QnA bot using botframework’s QnA Maker

When talking about the botframework, and chatbots in general, people usually assume that these are all using some clever logic and Natural Language Processing (NLP) to deliver a chunk of business logic via a natural language interface.

With the botframework this is most likely implemented by wiring up the Language Understanding Intelligent Service (LUIS): originally a stand-alone, (optionally) self-training, natural language understanding service, but now part of Microsoft Research’s Cognitive Services – previously “Project Oxford” – a collection of extremely powerful machine learning APIs for processing images, video, text, speech, to extract meaning.

Exceptionally powerful, incredibly clever stuff.

Almost all botframework articles and tutorials you’ll see at the moment will either do very basic pattern matching to extract intent from a message, or they’ll use LUIS (or a combination of the two); using LUIS is no small task, so I’ll come back to it in another article.

FAQ bot

There’s also another option; one that’s not well documented, nor widely advertised, but is extremely useful both for getting a very useful chat bot up and running, and for enhancing an existing chat bot by plugging gaps in its “conversational knowledge”: the QnA Service.

Using this service you can very easily create your own QnA/FAQ/Knowledge Base bot; the beauty of this type of botframework service is that it requires no coding at all; you simply “seed” your “QnA” bot with predefined content – Questions and Answers – and the QnA Maker service does the rest.

The result is an endpoint which takes a query and returns a json response containing the matching content, if any, and the degree of confidence for that match; you can also embed your QnA bot directly into a web page using the hosted html endpoint.

What’s extremely powerful is how it (presumably) uses LUIS under the hood; since you don’t need to ask the exact question you seeded it with to get the relevant answer, there must be fuzzy logic/pattern matching/NLP going on. You can also “train” your QnA bot with multiple phrasings of the same query for a given response.

Possible Implementations

This is an extremely cheap and easy way of creating your company’s first chat bot; all you need is some seed data in one of many possible formats, and you can even embed the web hosted bot if you don’t want to create your own using your QnA bot’s API endpoint (“service URL”).

If you have more resources (time, knowledge, people), then you could build a bespoke javascript client that calls the service URL directly.

If you’re already using LUIS and the botframework, then you can have your “None” intent (i.e., where LUIS hasn’t matched the incoming message with a high enough confidence score) call the service URL with the same query to see if there’s a relevant FAQ result to display instead, thus plugging a gap in your LUIS bot’s conversational ability.

For example, if you’ve created a chatbot for booking a holiday, and can match all queries about locations and flights and hotels, this FAQ bot could handle questions about the refund policy, or holiday insurance, that obviously wouldn’t be needed by the main logic of the bot.
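As a very rough sketch of that approach in C#, a LuisDialog’s “None” intent handler could just forward the unmatched text on to the QnA service (AskQnAMakerAsync here is a hypothetical helper that wraps the kind of HTTP call shown later in this article):

[LuisIntent("None")]
public async Task None(IDialogContext context, LuisResult result)
{
    // LUIS couldn't match the message to an intent, so try the QnA/FAQ knowledge base instead
    var answer = await AskQnAMakerAsync(result.Query);

    await context.PostAsync(answer ?? "Sorry, I'm not sure how to help with that.");
    context.Wait(MessageReceived);
}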

Create your first QnA bot

Head over to qnamaker.botframework.com and log in with your Microsoft Live account. You’ll initially be redirected to dev.botframework.com before being whisked off to the QnA maker page:

botframework QnA maker - Create new QnA service

Start by clicking that “Create new QnA service” and you’ll be taken to a screen containing a few steps, most of which are “either/or”, but that’s not obvious initially.

Name it

The first item is naming your service (mandatory):

Naming your QnA bot

This is the name that will appear against each response within the hosted web page version of the QnA bot; other than that, it’s not really used.

Seed data

You can now choose from a combination of one or more of the following methods of seeding your QnA bot:


a) FAQ URL(s)

You can choose to pass in one or more URLs to existing, publicly accessible, FAQ style pages; so long as they have a reasonably consistent “FAQ” type HTML structure, then the QnA Maker service can extract the Qs and As – this is extremely cool, and I’ll get on to it in my 2nd example:

FAQ URL entry for seed data

I’ve tried a few help pages with different HTML structure, and it has been pretty reliable in extracting the data so far.

b) Textbox Q and A pairs

We can enter our Q and A pairs directly into a textarea on the page in the format question:answer, with one pair on each new line – I’ll use this in the first example:
Text entry for seed data

c) Upload a file

Again, this is very cool in that it allows you to take preexisting content, and just import it directly; you may have a FAQ doc or PDF that is used internally, or perhaps was used as the “requirements” for the “help” section of your site.

You could also programmatically generate a tsv (tab separated values) file and upload that:

file upload for seed data
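If you do go the programmatic route, generating the file could be as simple as this quick sketch (assuming one tab-separated question/answer pair per line):

using System.Collections.Generic;
using System.IO;
using System.Linq;

// write one tab-separated question/answer pair per line
var pairs = new Dictionary<string, string>
{
    { "Who was the plumber always fighting Bowser?", "Mario" },
    { "Which hedgehog collected rings?", "Sonic" }
};

File.WriteAllLines("qna-seed.tsv", pairs.Select(p => $"{p.Key}\t{p.Value}"));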

QnA Bot v1 – Seed data via text entry

For the first example, I’m going to go with just entering some text in the textarea of section 3:

Who was the plumber always fighting Bowser?:Mario
Which was the first Zelda game on the SNES?:A Link To The Past
Who was the final boss in Street Fighter 2?:M. Bison
Which hedgehog collected rings?:Sonic
Who fought Metroids?:Samus Aran

(And yes, I’m aware that the Japanese version of SF2 named the end bad guys differently, since the boxer – “Balrog” in the non-Japanese version – was meant to be called “Mike Bison”, and Capcom wanted to avoid a lawsuit so shuffled the names around.. but anyway…)

Extract data and Train

Now you’ll have the options to “Cancel” or “Extract”; you have to “Extract” before you can click “Next”.

As such, go ahead and click “Extract”; depending on the data source and data size, this can take a little while. As far as I can tell, this is importing the data, identifying “utterances” for each Question and creating something equivalent to LUIS models for each Answer, then training LUIS.

Once you see the little bit of blue text that says “5 Question-Answer pairs created” you can click the “Next” button to continue…


Now you’ll have a chance to try your new QnA bot out with the embedded web chat.

Testing your QnA bot

If there are multiple possible responses for a question, the QnA service will return the answer with the highest score.

However, in this mode we can see all the potential matches on the left of the chat, and can override the given response, retraining the bot as we go!

On the right of the chat window we can enter alternative phrasings for the same question, to allow the underlying logic (presumably LUIS) to match them.

Training your QnA bot

There’s even the option to upload a new file here (as you could in stage 4 on the previous screen), so you can keep tweaking the accuracy of the language matching logic.

If you do enter any new info, click “Re-Train” in order to have it apply.

retraining popup

Once you’re happy with your bot, click “Publish” to finalise the process and be given the “Service URL”.

published bot!

Using your QnA bot via the Service URL

The service url is the same endpoint for all QnA bots, just with a different “kbId”. You can just put this in a browser and append your question to get the matching answer along with the score of confidence for that match:

json endpoint response

(That url suggests this service used to be called a “knowledge base” service instead of a “q and a” service, eh?)
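If you want to call it from code rather than a browser, a quick C# sketch might look like this – the service URL is the one from the publish step, and the “answer” and “score” property names assume the json shown above:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static async Task<string> AskQnAMakerAsync(string question)
{
    // the Service URL from the publish step; the question is just appended to the querystring
    var serviceUrl = "YOUR-QNA-SERVICE-URL";

    using (var client = new HttpClient())
    {
        var json = await client.GetStringAsync(serviceUrl + "&question=" + Uri.EscapeDataString(question));
        var result = JObject.Parse(json);

        // a confidence score also comes back in result["score"] if you want to apply a threshold
        return (string)result["answer"];
    }
}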

Using your QnA bot via a web page

Notice the text that says “You can also chat with rposbo_bot_trivia here” – tap that link and you’ll get your bot in a full browser web page:

webpage hosted bot

Pretty cool stuff, huh?

QnA bot v2 – Seed data via URL

Let’s try that again, but using a URL to fill the bot’s brain. Head to your QnA bot homepage and create a new bot. Give it a name, and this time let’s paste in the URL of a website with a FAQ page.

I’ve used the UCAS FAQ page – and I’ve also tried this with the Thread.com help page, which is a completely different structure but still loads perfectly fine.

The UCAS FAQ page looks like this:


And an expanded question looks like this:

UCAS FAQ example answer

If you paste the URL https://www.ucas.com/ucas/undergraduate/apply-and-track/frequently-asked-questions into section 2 of the bot creation page, then hit “Extract”, you should see the notification that “14 Question-Answer pairs created”.

Now let’s test it!

testing and training the UCAS FAQ bot

Notice that the question I asked here – “what’s the application process” – isn’t the same as the text on the FAQ page – “What will happen when I’ve sent my application?” – and it appears to match several answers; notice the blue boxes on the left.

Let’s try that again with a different question, and use the service URL instead:

UCAS json endpoint response

Only one response returned, but with a pretty low confidence score.

Now here it is using the hosted web page version:

UCAS hosted webpage

Again, notice the questions I’ve asked are different between these two tests, and neither are the same as the actual question from the website, yet both match back to the correct answer.


This is obviously a really powerful and useful chat bot tool, but it’s seriously hidden away in the bot framework portal, lacking in-depth documentation.

You can have your own FAQ bot up and running in just a few minutes.

Hopefully you found this useful, and if you successfully create a QnA bot, please let me know how it goes!


Debugging BotFramework locally using ngrok

No doubt you’re already having lovely long conversations with your bot via Skype (or Facebook Messenger, or even SMS!) built using the botframework, and by using the Bot Emulator you can run your bot locally and debug it.

However, once it’s deployed and is being called via the Bot Connector framework, instead of directly, things get a bit tougher.

If you haven’t managed to override the – rather nasty – default exception handling that swallows exceptions and spews out reams of useless stack trace, then you may not have much idea what’s going on with your deployed bot, since all you get back is “Sorry, my bot code is having a problem.”

When you encounter a strange problem whilst conversing with your bot in Skype, going through the process of adding loads of logging and redeploying, trying again, checking logs – just to see the journey your bot code is going through – isn’t the most efficient.

If only you could debug the code on your development PC just as easily as you could before the bot was deployed, locally in Visual Studio…

In this article I’m going to show you how to debug your bot code from Skype through to your local PC’s Visual Studio instance, thanks to the amazing ngrok!

Continue reading


Summing CSV data with Powershell

As I’ve mentioned previously, I tend to use Powershell for all manner of random things. This time around I wanted to use it to avoid having to upload a csv to google drive or opening up my other laptop that has Excel on it, just to get some numbers.

I’m self-employed, so I have to regularly do my personal tax return. My – extremely inefficient – process involves leaving it until the last minute, then trawling through my online bank account and downloading each month’s statement in csv format and digging through these to find the numbers I need to fill out the various documents.

Naturally, I’d prefer to do this automatically, ideally scripted. So I did!

Let’s assume you’ve got a CSV file called September.csv that you want to play with, perhaps with content a bit like this (the first line being the header row, which gives us the property names to work with later):

Date,Description,Amount
2016/09/13,Black Coffee,3
2016/09/11,Double Espresso,2
2016/09/10,Coffee Beans,8
2016/09/02,Single Espresso,1.7

Let’s read the csv file in as text:

Get-Content .\September.csv

Now let’s convert the content into a Powershell object by piping the file contents into another cmdlet:

Get-Content .\September.csv | ConvertFrom-Csv

Here’s where we can use the Measure-Object (or just measure) cmdlet to pull out various useful stats, such as Sum, Average, Maximum, and Minimum for a given property – which in this case maps to the “Amount” column in my csv file:

Get-Content .\September.csv | `
ConvertFrom-Csv | `
Measure-Object "Amount" -Average -Sum -Maximum -Minimum

This gives us the results:

Count    : 4
Average  : 3.675
Sum      : 14.7
Maximum  : 8
Minimum  : 1.7
Property : Amount


How about you want to filter that a bit? I want to know just how much I’ve spent on espressos, so I’m going to use Where-Object (where for short):

Get-Content .\September.csv | `
ConvertFrom-Csv | Where-Object {$_.Description.Contains("Espresso")} | `
Measure-Object "Amount" -Average -Sum -Maximum -Minimum

Which results in:

Count    : 2
Average  : 1.85
Sum      : 3.7
Maximum  : 2
Minimum  : 1.7
Property : Amount

Handy, huh? Another little one-liner to get you through the day.

botframework dialog context set privateconversationdata

Persisting data within a conversation with botframework’s dialogs

In the previous botframework article I covered the different types of responses available for the botframework. This article is going to touch on the Dialog and persisting information between subsequent messages.

So what’s a Dialog?

Dialogs can call child dialogs or send messages to a user. Dialogs are suspended when waiting for a message from the user to the bot. Dialogs are resumed when the bot receives a message from the user.

To create a Dialog, you must implement the IDialog<T> interface and make your dialog class serializable, something like this:

[Serializable]
public class HelloDialog : IDialog<object>
{
    protected int count = 1;

    public async Task StartAsync(IDialogContext context)
    {
        await context.PostAsync("You're new!");
        context.Wait(MessageReceivedAsync);
    }

    public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
    {
        var message = await argument;
        await context.PostAsync($"I've seen you {count++} times now, {message.From.Name}");
        context.Wait(MessageReceivedAsync);
    }
}

What I’m trying to show here is how the dialog handles an initial message from a new user, versus the continuation of a conversation with that same user.
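To actually get messages routed into this dialog, you’d hand them over in the standard bot template’s MessagesController, roughly like this:

public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
{
    if (activity.Type == ActivityTypes.Message)
    {
        // pass the incoming message to the dialog stack, starting with HelloDialog
        await Conversation.SendAsync(activity, () => new HelloDialog());
    }

    return Request.CreateResponse(HttpStatusCode.OK);
}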

Continue reading


Rate-Limiting, Powershell, Pester, and the ZenDesk API

Have you ever had to utterly hammer an API via a little Powershell script, only to find you’re getting rate limited, and lose all that previously downloaded data before you could persist it somewhere?

I have. I’ve recently put together a script to query a customer support ticketing system’s API, get a list of all tickets within a given time period, then query again to get the full content for each ticket.

All of this data is being added to a custom Powershell object (as opposed to churning out to a file as I go) since I convert it all to Json right at the end.

I’d rather not get half way through a few thousand calls and then have the process fail due to being throttled, so I’ve chosen to create a little function that will check for – in my case – an HTTP 429 response (“Too Many Requests”), get the value of the “Retry-After” header, then wait that many seconds before trying again.

This particular implementation is all quite specific to the ZenDesk API, but could easily be adapted to other APIs which rate limit/throttle and return the appropriate headers.

I’ve called the function CalmlyCall as a nod to Twitter’s original throttling implementation when they used HTTP 420 – “Enhance Your Calm” – back in the day.

Continue reading

dev portal skype configured

Botframework Web Chat Embedding

The past two articles have been about setting up a botframework bot in Skype and comparing the various Skype clients to see how it’s rendered. In this short article we’ll have a look at configuring the Web Chat client.

The Web Chat client is enabled by default when you register your bot:

enabled clients

If you click “edit” next to Web Chat, you’re taken to the configuration page:

configure within botframework dev portal

Click “Regenerate Web Chat secret” and those textboxes will be populated

regenerate web chat secret

You can now embed the web chat client on your webpage using an iframe

<iframe src='https://webchat.botframework.com/embed/rposbo_demo_bot?s=YOUR_SECRET_HERE'></iframe>

Which looks like this – have a play!