My previous blog post was a proclamation of my admiration for AWS AppSync as a platform for GraphQL. In this post, I wish to continue my tango with the Amazon service that has given immense power to developers, especially those developing APIs. This is a demo post in which I shall attempt to show the very basics of getting started with AppSync, and hopefully also provide some insight to those new to the technology. By the end, I hope to have walked through some extremely basic examples through which you can grasp the core concepts of AppSync and, especially, GraphQL. As mentioned in my previous blog, AWS AppSync and GraphQL in general are still novel technologies and have not yet been adopted to the same extent as conventional data-driven application development methods.
In this demo, we shall first go over the basic structure of GraphQL and then implement those structures in the AWS AppSync Console. Considering the different ways one can use AppSync, I shall also try to demonstrate some of these methods while maintaining brevity. Amazon has never faltered in producing detailed documentation for its services, and this blog is not meant to be an alternative to that documentation, but rather a supplement you can use to gain solid insight before diving into AppSync and its myriad capabilities.
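To set expectations before we open the console, here is what the "basic structure of GraphQL" looks like in practice: a schema defines types and the operations (queries and mutations) clients may perform against them. The Todo type and its fields below are purely illustrative examples I made up for this demo, not anything AppSync provides by default.

```graphql
# A minimal, hypothetical schema: one data type,
# two read operations, and one write operation.
type Todo {
  id: ID!
  title: String!
  done: Boolean!
}

type Query {
  getTodo(id: ID!): Todo
  listTodos: [Todo]
}

type Mutation {
  addTodo(title: String!): Todo
}

# A client would then send a query such as:
# query { getTodo(id: "1") { title done } }
```

The `!` marks a field as non-nullable, and the client query selects exactly the fields it wants back, which is the core idea that distinguishes GraphQL from fixed REST responses.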
It has been two weeks since Re:Invent. Last week, as I recovered from the jet lag of the Las Vegas-to-Turkey trip, I took some time to evaluate what we learned from the event, made some adjustments to the Thundra roadmap, and got back into the daily work routine. Now, I finally have some time to share my thoughts on the event in this recap blog.
First, the AWS team does a great job putting on the conference. Previously, I had followed Re:Invent via live streams, but this was my first time in Vegas, and I really admired the quality of the team's work in managing, educating, and entertaining tens of thousands of attendees.
I was often on duty at our Thundra booth, giving demos to visitors curious about AWS Lambda, curious about Thundra, or simply curious. Traffic was non-stop, giving me the opportunity to validate assumptions and clarify some open questions I had gathered before coming to Re:Invent. I want to thank the 1,300+ visitors who stopped by our booth and shared their experiences and needs regarding serverless, monitoring, or both.
As a developer, one of the most irritating things I face is that my attempts to implement good coding practices often render my code broken and filled with bugs. It is quite the tragicomedy of code development, all in the effort to comply with the DevOps ecosystem. The problem usually occurs when I try to configure tools I am not familiar with, only for my code base to inevitably throw errors or behave unexpectedly. I thus end up barraging senior developers with Slack messages, and Google with endless searches, in order to unravel the correct way of configuring these tools. Obviously, if one follows the tooling docs to the letter, it should be simple, right? Wrong! More often than not, the documentation does not address the specific use case you are trying to implement. Typically, developers suffer through many easily avoidable errors on the journey to becoming an expert at configuring and implementing a particular tool.
One example of a difficult tooling scenario is implementing a typical serverless monitoring tool. In order to collect even basic monitoring data, you often need to perform an excessive amount of in-code configuration. Usually, you are required to ‘wrap’ your code with the monitoring tool for monitoring to actually occur. Wrapping is the most popular way to integrate monitoring tools and collect basic data from your functions. However, this approach means you need to wrap numerous functions in order to successfully monitor an entire application.
Furthermore, in addition to the integration burden, configuring these tools can prove tedious. It is unreasonable to expect a single person, say an unpaid intern, to wrap hundreds of functions, let alone to do so without mistakes. No, there needs to be a simpler, less risky way of implementing monitoring for serverless applications.
Our team at Thundra realized that this manual, labor-intensive approach simply isn't efficient, so we have worked hard to design a monitoring tool that is simple and easy to use. Thundra offers its own Serverless plugin that provides automated wrapping to set up basic monitoring, along with automated instrumentation for more detailed monitoring. You don't need to change your code base just to monitor your code. Finally, adding monitoring to your Lambda functions is actually easier than writing them. This is the way it should be!
Our previous blog showed you how our automated instrumentation (for detailed monitoring data) works, especially for Node.js. Now, let’s take a look at how automated wrapping is implemented. For now, we support automated wrapping for Node.js and Python functions.
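As an illustration of how little ceremony automated wrapping involves, it typically reduces to a few lines of serverless.yml rather than any code change. The configuration below is a sketch of the general shape; the exact plugin name and keys are assumptions on my part, so please treat the real Thundra documentation as the authority.

```yaml
service: my-service

provider:
  name: aws
  runtime: nodejs8.10

plugins:
  - serverless-plugin-thundra   # wraps functions at deploy time

custom:
  thundra:
    apiKey: <your-thundra-api-key>

functions:
  hello:
    handler: handler.hello      # handler code itself is untouched
```

Contrast this with the manual approach: instead of editing every handler file, the wrapping happens once, at deployment, for all functions in the service.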
I attended the keynote speech of Charity Majors at Serverless Computing London, where she highlighted the term observability-driven development as a way to create highly available and resilient systems. Her speech inspired me to write about what observability-driven development is and what it means for serverless.
In his GA announcement, Emrah outlined the newest features in our GA product, including rich support for Node.js applications. Here, I want to go into our support for manual and automated instrumentation and show you how to add this to your Node.js applications. But, first, let’s talk about why instrumentation is useful in serverless environments...
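Before getting into the product specifics, it helps to see what "manual instrumentation" means in the abstract. The sketch below is a generic illustration of the idea, not Thundra's actual API: the developer marks spans around interesting sections of code, and a recorder collects their names and durations for later analysis.

```javascript
// Generic sketch of manual instrumentation: open a "span"
// before an interesting section, close it after, and let a
// recorder accumulate the measurements.
const spans = [];

function startSpan(name) {
  const begin = Date.now();
  return {
    close() {
      spans.push({ name, durationMs: Date.now() - begin });
    },
  };
}

async function handler(event) {
  const dbSpan = startSpan("db-query");
  await new Promise((r) => setTimeout(r, 20)); // stand-in for a DB call
  dbSpan.close();

  const renderSpan = startSpan("render");
  const body = JSON.stringify({ ok: true });
  renderSpan.close();

  return { statusCode: 200, body };
}
```

Automated instrumentation aims to produce the same kind of span data without you having to sprinkle these calls through your handlers yourself.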
We believe that your data should not exist in a silo and you should be able to view, manipulate, and analyze it in the way best suited for your business, using your favorite platforms. With this goal in mind, we feel integrations with popular data platforms are an important part of Thundra’s offering to the serverless monitoring and observability space.
First, a heads-up to all our beta customers: in order to see data in the Thundra Web Console and take advantage of our GA features, you need to update your existing agents. Currently, all beta agents send data to beta.thundra.io. However, this platform will be discontinued on November 1st. We recommend that you update your agents to the latest versions immediately, which will allow you to receive data in console.thundra.io.
Greetings everyone! I’ve been looking forward to this moment for a long time now. After a year of development work, I am very excited to announce that Thundra is taking flight out of beta.
If your AWS Lambda application is experiencing terrible latencies and delivering a frustrating user experience, high CPU load may be the main problem you need to solve.
How to debug and identify the parts of the code that cause a bottleneck in a serverless application can be a difficult question to tackle. For a regular application, you have many debugging and monitoring tools that let you observe your application while it is running, making it easier to identify faulty or inefficient parts. However, as you may already know, serverless is a brand-new technology, and it therefore lacks tools that provide such debugging and tracing capabilities. As first-hand users of AWS Lambda functions, we understand the agony of not being able to trace and debug your functions, especially when there are errors or unexpected behavior. Considering how rapidly the popularity of serverless is growing, it is imperative that programmers have the means for unimpeded, smooth development. This is where Thundra, "Full Observability for AWS Lambda", comes in: as the slogan suggests, it aims to give you full visibility into what is going on in your application and to ease the programmer's life.
Alexa is Amazon's virtual assistant and the brain behind tens of millions of Echo devices like the Echo Show and Echo Spot. Alexa provides capabilities (called skills) that enable customers to experience a more personalized service. The Alexa Skills Store currently has more than 45,000 published skills, and this number is increasing rapidly.
Whatever monitoring tool you use for AWS Lambda, the privacy of the monitoring data is always a headache. It is quite normal and common for monitoring data to include sensitive data, or clues about sensitive data. To solve this, it is better to keep the monitoring data secret on your own instance(s). But then, how will you visualize and extract insights from the data? Will you allocate time to query this data yourself? The queries will take too much time; which fields will you index?
First, let me introduce myself: My name is Christina Wong and I joined Thundra in May 2018 as VP of Marketing. My background includes roles in mechanical engineering, sales, product marketing, and partnerships across a variety of different sized organizations (from startup to large companies) and in several different industries (automotive, defense, software). I love working in the space where business and complex technical topics intersect and I am especially fascinated by new software challenges and adoption of new application paradigms of all sorts. This includes cloud, microservices, containers, devops, and, now, serverless. In my spare time, I am a mechanic and race car driver on a team called The Cosmonaughts.
With Thundra, you can gather useful insights about your AWS Lambda functions, such as errors and performance degradations. Thundra gives you the flexibility to instrument your code either manually or by using an automated approach with no code change. Now, it is time to send the monitoring data to Thundra for evaluation. Thundra gives you two options to send your monitoring data:
FaaS is a big success story. It allows us, the developers, to concentrate only on the application code. However, it's worth noting what we often forget as we move from monolithic service design to microservices: while we remove complexity from the services, we add complexity to the system of services.
I remember my first time writing code in C, years ago. I was struggling to discover why my code was getting a segmentation fault. To detect the point of failure, I printed every single variable to the console at specific checkpoints, hoping the output would tell me why. Back then, I wasn't aware that my silly printf calls were primitive examples of manual instrumentation.
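That checkpoint style of debugging looks the same in any language. Here is the JavaScript equivalent of those prints, with a made-up parsing function purely for illustration: you dump state at hand-picked points and eyeball the output to locate the failure.

```javascript
// Checkpoint "instrumentation" the hard way: print internal
// state at a few hand-picked points and read the log to find
// where things go wrong.
function parseOrder(raw) {
  console.log("checkpoint 1: raw =", raw);
  const parts = raw.split(",");
  console.log("checkpoint 2: parts =", parts);
  const qty = parseInt(parts[1], 10);
  console.log("checkpoint 3: qty =", qty);
  return { item: parts[0], qty };
}

const order = parseOrder("widget,3");
```

Proper instrumentation does the same job, recording internal state at known points, but in a structured, queryable form instead of loose console output.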
Suppose you want to buy a ticket to a concert of your favorite band, and you know that tickets are running out. You need to open your computer, wait for the browser to launch, go to the ticket sales website, see that there is only one ticket remaining, and click to buy. Then, boom: "Sold out" is the phrase you are going to hate for a while. We are sad to say it, but you have been "cold started". You would have been able to catch the last ticket if your browser window had already been open, showing the website, ready for you to click the "buy" button. You should have kept your computer warm to buy this ticket.
We conducted our first webinar last week. After we released our public beta, we received questions and feedback about Thundra, so it was time for a webinar, with Serkan Özal, our lead engineer, and me as product manager.
Many of the monitoring solutions available on the market aren't built with the nature of the AWS Lambda environment in mind. Many of these tools publish monitoring data synchronously, within the request, which is an anti-pattern for serverless monitoring for the following reasons:
For a very long time, monitoring tools were simple. They were mainly used as external pings.