When you’re testing an API, or really any kind of software, you can choose between functional testing and performance testing. These two types of tests are not an either/or choice; adequate software quality assurance needs both to cover different surface areas and requirements. My latest article for my client BlazeMeter is called “Functional Testing vs. Performance Testing and the Value of Using Both”. After comparing API testing and API monitoring in general, this time I took a deeper dive into testing. I’m always happy to get feedback on my articles, and I’m open to new technical writing contracts as well - please check out my Writing page.
Disclosure: This work was paid for by BlazeMeter.
Another API the Docs virtual meetup took place last night, the third one of the second season. Michael Meng, Lorna Mitchell, and Patrick Hammond shared their thoughts on API documentation. As with the first and second meetups of the season, I’ll provide a little personal recap.
Michael Meng, a professor at Merseburg University, talked about research-backed ways to improve API documentation. The key questions are which information to present and how to present it. Previous research indicated different strategies in the way developers access documentation. There is the opportunistic, or exploratory, strategy, and the systematic strategy, where developers try to gain a broad understanding first (a pragmatic approach combines both). In general, they access documentation in a task-oriented manner, focused on code examples (which backs up nicely what Milecia McGregor said about code samples at a previous meetup). It’s also vital to explain underlying concepts, but it’s best to integrate them into use cases, as developers often don’t read separate concept docs. Michael’s team created guidelines for enabling efficient access and facilitating initial entry into the API and then rewrote the documentation for an existing API based on those guidelines. They invited 22 developers and asked them to complete a task. Half of the group got the original documentation; the other half got the optimized version. What I found interesting is that the group with the optimized documentation performed more accurately, but only slightly faster. However, they did spend more time in the documentation, which indicates that good documentation motivates developers to understand first and then act, where they would otherwise spend time on trial-and-error experimentation. Michael was careful about his study’s replicability and said there needs to be more research to disentangle the different optimizations. The paper is available as open-access research.
Lorna Mitchell started her talk by quoting the Postman State of the API report, which maintains that the top issue with APIs is documentation, just like the research Michael mentioned. She said: “It’s not an API if it doesn’t have (reference) documentation.” I also liked how she compared various improvements to API documentation with a restaurant menu: even though a customer doesn’t order everything at once, every menu item improves the whole. She mentioned a list of those improvements, with “Getting Started Content” being the first, reminding us that “every API is someone’s first”. Hence, you shouldn’t be afraid to go overboard in the documentation, and make sure to link additional resources. It seems obvious, but Adam DuVander has also said that many companies don’t have a “Getting Started” guide. Examples are “worth many words”, and OpenAPI helps. Lorna also explained their docs-as-code setup, where they use separate standalone files included in the docs through custom wiring (again, the approach that Milecia suggested as well). Their code snippets are parts of longer files with full examples stored on GitHub.
Lorna mentioned another type of content: “HowTo: X with Y” guides, where X is a common task and Y is a tech stack. They can be blog-type content but also part of documentation, and they should be end-to-end guides: developers who use them and read nothing else should still be fine. Along with those go demo apps, where you should mix up different tech stacks. I was happy she mentioned these two types of content, as I enjoy creating them for my clients. Finally, she included troubleshooting guides (which people can find by searching for error messages) and recommendations of related tools (that the company uses internally but that can also help API consumers). One of these tools is Postman, and Lorna added that releasing Postman collections provides additional value, even if Postman can import OpenAPI. Also, SDKs are helpful, even if they are just a thin layer over the API. However, only ship what you can support! Lorna recently wrote an article on SDKs. Also, check out my summary of an interview with Lorna if you’re looking for more nuggets of API-related wisdom from her.
The third talk was by Patrick Hammond, who explained the docs-as-code/DocOps approach at Adyen. Sadly, I had to leave halfway through the presentation for another meeting, so I can’t provide you with a summary this time.
API testing and API monitoring are vital parts of a successful API program, just like API design and API documentation. However, what is the difference between testing and monitoring, considering some overlap in approach and tooling? My take on this is that testing is primarily a part of building APIs, whereas monitoring is mainly about running APIs in production. I published the extended version of my comparison as a new article on the BlazeMeter blog, called “API Testing and API Monitoring: The Complete Guide”. I’m happy to continue my previous partnership with their company to explore and write about API life cycle topics. You can find additional articles I wrote for them and other clients on my Writing page, and I’m open to new writing contracts as well.
Disclosure: This work was paid for by BlazeMeter.
Releasing an API without any authentication is often a bad idea, but managing users and API keys is too much effort for smaller APIs, for example, side projects. However, CloudObjects has a method to distribute shared secrets between domains, and you can leverage it for API authentication. How? I have written a tutorial-style article in which I explain shared secrets and give a practical example of how you can implement them in an API. You can read my post on adding lightweight authentication to your API with CloudObjects shared secrets on the CloudObjects blog.
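To give you a taste of the idea: once both sides know the same secret, the API only has to compare what the client presents against its own copy. Here is a minimal sketch in Python; the header name and the secret value are made up for illustration and are not CloudObjects’ actual scheme.

```python
import hmac

# Hypothetical shared secret, as it might be distributed between domains.
SHARED_SECRET = "s3cr3t-from-cloudobjects"

def is_authorized(headers: dict) -> bool:
    """Check an incoming request's secret header.

    hmac.compare_digest avoids timing side channels that a plain
    string comparison would leak.
    """
    presented = headers.get("X-Shared-Secret", "")
    return hmac.compare_digest(presented, SHARED_SECRET)
```

A request with the correct header passes, anything else is rejected; the full article covers how the secret gets to both parties in the first place.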
The second episode of the second season of the API the Docs virtual meetup series was all about API Design First, a topic that’s very dear to me. The two speakers of the evening, Ivana Isadora Devcic and Jeremy Glassenberg, approached the subject from different angles.
Ivana, a translator and technical writer who recently joined Redocly, talked about advocating for the API Design First approach. She reiterated that it helps with collaboration, getting early feedback, and automation, emphasizing that it unblocks technical writers, who can get involved sooner in the process. The extraordinary thing about her talk is that she framed the API advocate as the hero of a role-playing game and followed that theme in her slides, providing different items that help the hero fulfill their quest. API design is a business problem, not a documentation problem. You can sell it to management using “magic words” while educating your peers by showing them available tools. You should take the role of a leader but empathize with those who don’t immediately jump on the bandwagon. Ivana said that things can still go wrong, for example, due to bad timing and communication, citing her previous company as an example, which is a refreshingly honest approach. It’s good that we are becoming more able to talk about failure, as seen in events like “fuck up nights”.
Jeremy took the Silicon Valley perspective and started by talking about investors. The VC (venture capital) community is becoming increasingly interested in companies having APIs and those building the tools around the API lifecycle. Next, Jeremy talked about the history of APIs. A decade ago, REST started to take over from SOAP, but tooling was terrible, so APIs weren’t that good either. There was WADL, but many developers didn’t like XML. APIs became better and more popular as JSON replaced XML, and the first version of Swagger (eventually becoming OpenAPI) entered the stage, enabling a platform for additional tooling. The first tools focused on autogenerating APIs, causing developers to ship their database schema as an API. However, that wasn’t enough to convince people to use these APIs, so we had to think of APIs as products, and the role of the API product manager emerged. Jeremy showed various types of tools, making the distinction between those that generate OpenAPI (like IDEs) and those that use it, either for building (API gateways and server stub generation) or for developer experience (API documentation, developer portals, SDKs and mocks). We should think of these tools as a tech stack. He ended his talk by hinting at future opportunities, including CRUD endpoint autogeneration from models, enhanced visual IDEs, app marketplace frameworks, and more sophisticated back-end generation. Another exciting concept he mentioned was the integration of API tools (especially monitoring) with customer support, evolving CRM (customer relationship management) into DRM (developer relationship management).
Altogether, both talks have mentioned excellent cases of APIs’ potential when we apply API Design First and build automation, a point that Josh Ponelat and I also try to make in our book “Designing APIs with Swagger and OpenAPI”.
You can also read my recap of the first API the Docs event of the second season.
“System change, not climate change” is one of the banners you can typically see at a Fridays for Future strike. I’m not a fan of that slogan, because while “not climate change” is something we should all agree upon, there’s no further definition of what “system change” entails. It leaves room for interpretation. What is the “system”? Do these young people want to abolish democracy to establish a “climate dictatorship”, as some right-wingers claim?
There’s another banner I’ve seen at climate protests: “Burn capitalism, not coal.” Again, I think we can agree on “not coal”, but why “Burn capitalism”? And probably capitalism is what they meant by the word “system” on their previous banner.
Of course, I’m fully aware that slogans have to be short and to the point. However, it reminds me of something that has been on my mind quite often lately. It is a problem for the discussions in society as we talk about ways to avert the climate crisis and effectively solve other issues, for example, around social justice.
Friday night, Luisa Neubauer, one of the Fridays for Future movement’s leaders and public figures, was on German public television in the ZDF show Aspekte. Asked about system change, she said that we have to get away from “the dichotomy of capitalism and socialism”. That’s precisely what I’ve been thinking, too! Instead, she argued, we have to design an economic system compatible with the Paris agreement’s climate targets, but said it’s beyond her expertise to describe or even name that system. It’s also noteworthy that she never said anything against capitalism per se but always qualified it as “fossil capitalism” or “this capitalism”, which is a good start.
Many people speaking out in favor of capitalism often argue that we have a choice between either what we have right now (or an even more neoliberal version of it) or the failure that was Soviet-style state socialism. Since nobody wants the latter, we can’t mess around with the system.
On the other hand, people use capitalism as a one-word explanation for everything they deem unfair or wrong in society, casting it as a system designed around exploiting people and the planet for the sole benefit of a few rich people.
I don’t want to go too deep into it right now, but if you look at studies and statistics around human progress, the world tends to get better. For example, we have improved health and education and reduced poverty around the world. Most metrics look optimistic, except for CO2 levels in the atmosphere, and the latter could ruin everything else. I’d say it’s tough to argue that these positive developments happened despite, rather than because of, our capitalist economy. I personally believe in markets and the power of entrepreneurship as a motor for innovation. However, I don’t believe in unlimited growth, monopolies, and extreme inequality. We have to adapt our economy to reap the benefits and mitigate the downsides of capitalism, for example, through regulation and by redefining our goals for success beyond GDP growth.
Just like Luisa, I don’t claim I have the solution. I’m not a subject matter expert, only a curious software developer and IT entrepreneur cosplaying as a public intellectual. However, the one thing I’m confident about is that we won’t redesign our economy for the 21st century if we continue arguing between capitalism and socialism.
Milecia talked about why technical documentation for APIs and developer products matters. She emphasized that code snippets in documentation are of utmost importance because many developers are primarily looking for these. They don’t want to read all the prose in detail; they copy and paste examples instead. Hence, you lose some of your developers if the snippets don’t work, and if they have lousy quality, it reflects on your overall product. Milecia had an interesting suggestion to ensure the quality of your code snippets. First, you separate them from your docs. If you’re using docs-as-code, don’t use inline code blocks but add references instead. She showed an example in Gatsby, which supports including such samples. Once you separate them into individual code files, you can write unit tests and run them as part of your CI pipeline.
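The workflow she described can be sketched in a few lines. Assume a Python docs-as-code setup; the file names and the sample function below are made up for illustration, but the principle is the same in any stack: the snippet lives in its own file that the docs include by reference, and CI runs a unit test against that same file.

```python
# snippets/build_request_url.py -- the standalone sample the docs include
def build_request_url(base_url: str, user_id: int) -> str:
    """The exact code readers see (and copy) from the documentation."""
    return f"{base_url}/users/{user_id}"


# tests/test_snippets.py -- executed by the CI pipeline on every change
def test_build_request_url():
    assert build_request_url("https://api.example.com", 7) == \
        "https://api.example.com/users/7"


test_build_request_url()  # CI would invoke this via a test runner instead
```

If the API changes and the snippet breaks, the pipeline fails before the broken example ever reaches a reader.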
Louis talked about audience segmentation for developer portals, emphasizing that “developers” is too broad a term to describe your audience accurately. He suggested creating various personas, which can be industry-specific. Create a one-pager defining each persona, even including information that doesn’t seem immediately relevant, to make them good fictional characters. You can brainstorm the personas with your team and, once done, distribute them internally and use them whenever you create content. Louis showed a practical example, including one developer persona that reflected the exact copy-and-paste type that Milecia described in her talk.
Both speakers, but Milecia in particular, made a point that I also make a lot: developer content, and especially technical developer content with code appropriate for your audience, is vital to running a successful API program and developer portal. It’s an excellent opportunity to remind you that I also offer consulting, development, and technical writing services to assist companies with this kind of content. Please contact me to learn more.
Also, if you’re interested in future API the Docs events, you can join them for free, thanks to the sponsors, but you have to sign up here.
Before the pandemic started, I used to run board game meetups frequently. One of the highlights of those gaming nights was a party game called “Werewolves”, which some of you may also know as “Mafia”. I often led sessions as a narrator. When Anna Gát of The Interintellect wanted to run a virtual gaming night on Zoom, I quickly volunteered for that. I mostly based my approach towards implementing a virtual game on Anjuan Simmons’ “How to Play Werewolf Over Zoom” guide, but a day before it, I decided I had to add some code, API, and no-code stuff to it. Here’s a quick high-level overview of how I did it.
First, I created a new base in Airtable and a table for the game. Inside the table, every player gets a row. I added several columns for the characters, elimination status, and free-text fields for notes that I could take during the game to assist with the narration (for example, whether the Witch has used her potions). With Airtable’s grouping feature, I arranged the view to list villagers and werewolves separately and hide eliminated players to remove clutter. Another benefit of Airtable is that after the game, I could share a private link to the table so players can see what happened.
Then, I wrote some custom PHP code as a phpMAE class for two purposes. One is to assign characters randomly. The function takes a list of role cards, fills up the deck’s remainder with regular villagers, shuffles it, and updates every row in Airtable with the card’s character. While I did this with PHP due to personal preference (and to “selfdogfood” phpMAE), I think this is also a great use case to build with the new Airtable Apps they announced lately.
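The dealing logic itself is simple enough to show here. My implementation is a PHP class, but the algorithm looks like this (sketched in Python for illustration; player and role names are made up):

```python
import random

def deal_roles(players, role_cards, filler="Villager"):
    """Fill the remainder of the deck with regular villagers, shuffle,
    and pair each player with a card."""
    deck = list(role_cards) + [filler] * (len(players) - len(role_cards))
    random.shuffle(deck)
    return dict(zip(players, deck))

# Each assignment would then be written back to the player's Airtable row.
roles = deal_roles(["Ada", "Ben", "Cleo", "Dan"], ["Werewolf", "Seer"])
```

The narrator only chooses which special role cards are in play; everyone else automatically becomes a villager.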
The second purpose is player registration and role reveal. Using phpMAE and Twig templates, I created a minimalist HTML website for players to enter their names. The PHP code calls Airtable’s API and adds a row for the player, returning the ID for the created record and a secure hash to the frontend. After I announce that I have dealt the cards, players can click a button on the website. If the hash is valid, it fetches their player record from Airtable and displays their role without revealing additional information. You can see the source code for both here, but it partly relies on undocumented or invite-only phpMAE and CloudObjects features. Contact me if you’re interested in those.
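One plausible way to build such a reveal hash is an HMAC over the record ID, keyed with a secret only the narrator’s backend knows. This is a sketch of the idea, not my actual phpMAE code; the secret value is made up, and the real implementation may differ.

```python
import hashlib
import hmac

# Hypothetical narrator-side secret; never sent to players.
SERVER_SECRET = b"narrator-only-secret"

def reveal_hash(record_id: str) -> str:
    """Hash handed to the player's browser along with the Airtable record ID."""
    return hmac.new(SERVER_SECRET, record_id.encode(), hashlib.sha256).hexdigest()

def can_reveal(record_id: str, presented_hash: str) -> bool:
    """A role is only revealed if the browser presents the matching hash,
    so players can't fetch each other's records by guessing IDs."""
    return hmac.compare_digest(reveal_hash(record_id), presented_hash)
```

Because the hash is derived from the record ID with a server-side secret, the backend doesn’t need to store any extra state to validate it later.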
Most interactions throughout the game occur on Zoom voice chat, and communication with particular roles during the night uses private messages. For the werewolves, I created a separate group chat to interact in ways that Zoom doesn’t allow. For this game, we used Interintellect’s Mattermost instance, but any group chat tool works. When it comes to voting at the end of the day phase, I asked everyone to paste their decision in the public chat and hit send at once. However, counting the votes was the most tedious task of the narration, so if I add any additional gameplay features to my custom app, it would probably be voting.
It was not just the first time I narrated a Werewolves game over Zoom but also the first time I did so in English (instead of my native language, German). Therefore, I also spent some preparation time writing a script for how and when I wanted to explain everything. Its primary purpose was to make sure I didn’t forget to call anyone. I even flexed my creative muscles and added some color to the narration. All in all, running this game was a great experience that let me combine many things I enjoy, and I’ll probably do it again.
Airtable, the spreadsheet-database-hybrid and fan favorite of the no-code community, has announced a $185M VC funding round and a set of new features. I’m often concerned about large funding rounds. If a VC-driven company doesn’t live up to its hype, it either folds or ends up in a larger tech giant’s portfolio. In the case of Airtable, however, I strongly believe in the product and its potential. I have used Airtable in a client project before to replace a home-grown CRM. It’s part of the phpMAE-driven email handling setup I mentioned in my last post. In another current project, most non-personal data resides in Airtable so that their interface can act as an admin interface, saving implementation time for internal tooling.
The second new feature, Automations, supports scripting with fetch(), which means you can already integrate with everything that has an API.
The third feature is Sync and, as far as I understood it, allows mirroring data between different bases in Airtable. I won’t go into more detail here.
With the combination of Apps and Automations, Airtable fits perfectly into a product development philosophy of minimum coding, or an efficiencer mindset. It shows that developing applications with code and a no-code approach aren’t at odds. First, you start with existing tools and try to map your workflows in them. Next, you build iPaaS-style no-code integrations to connect the tools for more advanced workflows. Finally, once you reach the limits of that stage, you write custom code to extend your tools with plugins and set up advanced integrations.
One of the reasons I’m excited about APIs is their role as a bridge between the worlds of software development and no-code app building. I believe this will become increasingly important, and we’re not talking about it enough yet. It will have implications for many stakeholders, including business owners, software engineers, API designers, and technical writers. I’m still searching for ways to understand these developments and articulate my understanding. Every post on this blog is a step in that direction.
I wanted to follow up on my last post about “serial focus” and “parallel focus” by expanding on the idea that doing multiple things doesn’t mean you can do everything, and you can’t approach everything with full-scale perfectionism.
Yes, you can be the person who runs two companies at once. Still, you probably can’t be the person who runs two companies at once, has a perfectly tidy house, cooks excellent meals from scratch every day, and never forgets a single birthday of an acquaintance or family member. And if you believe that it’s important to remember birthdays, you can reasonably send your contacts a text message instead of a hand-drawn postcard. And that’s okay. You have chosen the path of the entrepreneur, and it comes with trade-offs. Of course, that applies to every other set of priorities. You may value your personal life and hobbies over your career, and that’s also okay.
The problem is that we see high achievers in the press and on social media, and we believe they’re perfect in every way, but every successful person probably has at least one area in their life at which they suck. However, they likely have people who have their back to provide the perfect appearance, or they communicate only their strengths while hiding their weaknesses.
There’s another aspect to it. I came across a tweet from Tiago of Forte Labs. He tweeted: “I really value conforming in most areas of work and life, so that in the few you don’t, you can create something truly new and beautiful. If you try to innovate on everything, you end up innovating on nothing”.
That tweet hit me because it’s related to perfectionism for me. I often feel it as the need to do things differently. For example, I have a set of custom-coded phpMAE classes that handle incoming email. But I’m not running an email service. Email is one of the things that should work so I can communicate with clients, business partners, and friends. Of course, experimenting and innovation are excellent, and using phpMAE for this is “selfdogfooding”. It still means I sometimes have to debug code when I just want to read an email. One related area where I’ve made the other choice is to use a micro.blog-hosted blog instead of rolling my own.
More generally, and to return to the high achiever from above, they are presumably quite dull in many areas of their life. Unless they are an eccentric millionaire celebrity, they probably live in a regular house, have a normal relationship and family, and buy and use the same items and tools as the rest of us (or their premium versions). The backdrop of everyday life allows them to make their achievements. It’s something worth thinking about more often.
It’s time to make an announcement that I’ve been holding back for a while: I’m working on a book! The name of the book is “Designing APIs with Swagger and OpenAPI”. I’m co-authoring the book with Josh Ponelat, a developer at SmartBear Software, the maintainers of the Swagger suite of OpenAPI tools. Manning Publications is publishing the book, and it’s already available as a live preview through MEAP, the Manning Early Access Program.
Since I joined the project later, after Josh had already written and published some chapters on his own, I wanted to wait to announce my co-authorship until Manning added the first of my work to MEAP. They did so last week, so I’m now happy to make this post.
I’ve done a bit of technical writing for clients and for my own projects and blogs, and I work on API documentation as a freelancer, but all of that is short-form writing such as articles and tutorials. A technical book is a challenge on another level because you have to have an overarching idea of the material you want to teach and then plan how to present it. It’s an excellent opportunity to learn and improve as a writer and to experience a professional publishing process. I gladly accepted that challenge when Manning contacted me and asked whether I wished to join the project.
The book is for everyone who wants to get into designing APIs. The first part of the book explains the basics of APIs and the OpenAPI specification. The second part, which we’ve started to publish now, walks through an API Design First web application project with a fictional development team. Together, we go from idea collection through domain modeling and user stories to a well-designed API in CRUD style that connects the frontend and backend of the web application. We also explore ways to implement this design as code and the challenges along the way. The third part of the book will extend and scale this project’s API, and the fourth part covers various advanced OpenAPI and API life cycle topics. Josh and I are eagerly working on these parts, and we’re looking forward to delivering the full book to you as soon as possible.
You can preorder the book “Designing APIs with Swagger and OpenAPI” on Manning’s website now to get immediate access to the MEAP and receive the full ebook or physical book later.
In my last post, I wrote about micro focus and macro focus, and how I feel I’m good at micro but bad at macro focus, whereas I observed the opposite in many people. Today I want to follow up and share another observation that is related to macro focus. I call it “serial focus” and “parallel focus”, for lack of better terms (if you have any, please share).
Some people have this one big idea, the main thing about their life. They have figured out their macro focus. Other people have multiple ambitious plans, but they apply serial focus, which means they focus on one thing at a time. The serial entrepreneur is the most famous example. These people pour all their expertise and dedication into an idea and either make it successful or find the right point to pivot or give up and change their focus. Once a project has reached a certain level of success, they sell it and start the next venture. Of course, they may have overlapping periods or prepare their next big idea, but they know their macro focus for a given time. Outside of entrepreneurship, it is also the nature of the stereotypical geek who can get obsessed with something and learn everything before the next obsession kicks in.
The opposite, “parallel focus”, is an oxymoron, because the very definition of focus is to limit oneself to a single thing. There are too many exciting things, and it’s hard to decide, so the person with parallel focus tries to make everything a priority at once. Quite often, I am that person. Again, there’s the example of the parallel entrepreneur. I’ve seen that term used for people involved in multiple ventures as a founder and for people who bootstrap a company while having a regular job or a contract. And there are successful examples of those, so is it possible to maintain parallel focus after all? I believe some considerations are vital unless you want to be on the highway to burnout.
Focusing on multiple things at the same time doesn’t mean you can do everything. You still have to say “No” sometimes. And I feel saying “No” is now even more difficult because adding a fourth item to a list of three seems like less of a big deal than removing your single macro focus. Also, be realistic about your goals and about perfectionism regarding achievements and results. You cannot compare a side project to something that another person dedicated their life and resources to. Finally, understand that there cannot be a perfect balance. Priorities change over time. You cannot expect to make progress in every area every single day.
So if, like me, you’re struggling with a missing macro focus and find yourself unable to serialize your priorities and approach them one after another with serial focus, I wish you good luck maintaining parallel focus. Still, please be aware of the consequences and limitations.
In the realm of productivity, I think there are two types of focus. You could call them “micro focus” and “macro focus”. I came up with the distinction as it has been on my mind a lot lately. A quick search before I started writing showed me that I’m not the only one making this distinction. What I found intriguing is that William Webb, the author of that article, said that he’s much better at macro focus than at micro focus. It has been my observation that most people are like him, but it’s been a different experience for me.
But first, let’s define our terms, and my definition might be slightly different from his. For me, micro focus is about being able to choose a task, ignore others that are not relevant at that moment, and get “in the zone” where you can perform without getting distracted easily. Macro focus is about having goals and clear priorities and not holding too many projects and responsibilities at once.
I am good at micro focus. Using the Pomodoro technique was very helpful in getting there. Of course, I sometimes procrastinate when I am not sure how to proceed, but once I’m tackling a project, I stay on it. I’ve seen so many people for whom the incoming email, the person walking outside the window, etc. is always more exciting than the thing they’re doing, or who switch from one task or topic to another the moment they feel like it.
On the other hand, I think that many people have a better macro focus. They decide on a job or a personal priority or a side project and then either keep at it or drop it after a deliberate decision that other things are more important. For me, so many projects sound exciting, and I want to be a part of them. Few things are good enough that I want to focus exclusively on them, though, not even for days or weeks. And there’s practically nothing that I’m doing that I want to get rid of entirely, probably because of the sunk cost fallacy but also an optionality fallacy of keeping all options open forever (which isn’t very sustainable).
It is one reason I’m freelancing with multiple customers and doing other projects simultaneously: having options and not buying into one thing too much. I guess my fascination with the world of APIs comes from a similar sentiment. I’ve always been an “and” person, not an “or” person. Cooperation over competition. The choice, comparison, and flamewars between technologies or tools A and B aren’t remotely as exciting as the integrations and standards that build bridges between the two.
As part of CloudObjects, I’m working on phpMAE. The PHP Micro API Engine is an opinionated serverless framework. One of its features is that it exposes any class methods as JSON RPC API calls. For public APIs, there’s a new way to make those calls to test a phpMAE class: straight from the CloudObjects directory. It may be a toy feature right now, but I consider it one of the first building blocks for unlocking the future potential of phpMAE.
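To illustrate what such a call looks like on the wire: JSON-RPC requests have a small, standardized shape, so invoking a phpMAE class method boils down to posting a body like the one below to the class’s endpoint. This sketch is in Python rather than PHP, and the method name is made up; check the phpMAE documentation for the actual method names a given class exposes.

```python
import json

def jsonrpc_request(method: str, params, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for a remote method call."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,   # maps to a public method of the phpMAE class
        "params": params,   # positional arguments for that method
    })

# Hypothetical call; any HTTP client could POST this to the class's endpoint.
body = jsonrpc_request("generateSign", ["Hello, world"])
```

The directory feature described above essentially does this for you from the browser, which is what makes it handy for quick tests.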
To demonstrate the functionality, I made an ASCII Art sign generator as a small example. You can view its source code and try it on its directory page. If you are curious to learn more about this new phpMAE feature and how to create your own public PHP class, read the full article “Playing with public phpMAE classes in the directory” on the CloudObjects blog.
One of my topics of interest is the future of work. I believe that digital transformation fundamentally changes the way we work. Two of the aspects often mentioned are the rise of remote work and more arrangements outside of traditional salaried employment. So far, I mostly considered these as independent developments that are aspects of increased flexibility when it comes to working. Then, however, I read an article titled “The Workforce Is About to Change Dramatically”.
The piece in The Atlantic covers multiple aspects. For example, how the increase in remote work could either hurt or transform the travel and hospitality industries. However, my key takeaway from the article was the connection the author drew between a rise in micro-entrepreneurship and working from home.
When you work remotely as an employee, it changes the relationship with your co-workers. I don’t want to say “weaken”. Interactions on Slack or Zoom can be intense, but it’s different. More importantly, however, online communications flatten the world, because it doesn’t make a difference whether the person sits in the next room or on the other side of the world. And it isn’t relevant whether they work on your team or whether you interact with them in a different community, such as a personal or professional shared interest group. On the Internet, you can pick your tribe instead of having to mingle with the folks you share an office with.
If you are sitting in your home, you’re first and foremost alone and working on your own, but your virtual connections can go anywhere. You may realize that there isn’t a significant difference in whether you do your work for your team or sell it on an open marketplace where you might enjoy even more freedom, flexibility, and additional money. Of course, concepts like the gig economy or passion economy are not only fueled by people sitting at home and having time to rethink their relationship with their employer, but I agree with the author that it’s a crucial aspect of it.
My relationship with social media in general and Twitter, in particular, has been a recurring topic on this blog. I like discovering people and stuff through my news feeds, but a lot of the time, they are just a big timesink that drives FOMO and keeps me from doing other things.
According to RescueTime (⇠ referral link), I spent nearly 30 hours on websites in the Social Networking category last month, with around 22 hours dedicated to Twitter alone. (If you feel the total number isn’t too high, consider that it’s only desktop usage during working hours and doesn’t include using the phone under the blanket.)
Facebook, on the other hand, accounts for less than an hour. You could argue that the people and topics on Facebook are just dull, but the primary reason is that I’m no longer reading the news feed. I log in to Facebook only to see my notifications or directly interact with people. To facilitate that, I use a Firefox add-on called “Disable Facebook News Feed”, a minimalist tool that takes the feed out.
There was no similar add-on for Twitter, so I created one for myself. I used the Facebook add-on for inspiration, but Twitter’s latest redesign doesn’t allow a simple CSS rule for feed removal. So I had to do something more sophisticated; if you’re interested in the solution, see the source on GitHub.
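For illustration, the general approach is a content script that re-applies the hiding logic whenever Twitter’s React UI re-renders the page. This is only a sketch of the technique, not the add-on’s actual code, and the aria-label heuristic below is an assumption on my part.

```javascript
// Heuristic: does an element's aria-label identify the home timeline?
// (Twitter's class names are auto-generated, so attributes like
// aria-label tend to be more stable selectors.)
function looksLikeTimeline(ariaLabel) {
  return typeof ariaLabel === "string" && /timeline/i.test(ariaLabel);
}

// In the browser, a single CSS rule doesn't survive re-renders, so the
// content script watches the DOM and re-hides the feed on every change:
if (typeof document !== "undefined") {
  const hideFeed = () => {
    for (const el of document.querySelectorAll("[aria-label]")) {
      if (looksLikeTimeline(el.getAttribute("aria-label"))) {
        el.style.display = "none";
      }
    }
  };
  new MutationObserver(hideFeed).observe(document.body, {
    childList: true,
    subtree: true,
  });
  hideFeed();
}
```

A MutationObserver is the standard tool for this kind of “keep fixing the page” extension, since it fires on any subtree change without polling.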
If you are curious about trying this add-on, you can get “Disable Twitter Feed” from the Firefox add-on directory. I marked it as experimental as it’s an early version that probably needs some fixes and improvements. Let me know what you think about it. If there’s sufficient demand, I’ll look into testing and submit it to other browsers, too.
To all the people that I follow on Twitter: I still love you! Feel free to tag me in conversations or DM me, and I’ll probably see it. At the moment, however, I have to prioritize my sanity and productivity over your great thoughts and interesting links.
In 2012, Evgeny Morozov wrote an opinion piece in the New York Times titled “The Death of the Cyberflâneur”; I no longer remember whether I encountered it back then. Recently, however, I learned about the concept of flânerie through online interaction with self-proclaimed flâneuse Patricia Hurducas. She pointed me to this article. (By the way, I also covered another article by Morozov on this blog last year.)
If you’re not familiar with the concept: in a nutshell, a flâneur goes for strolls in urban places, which he experiences as a mindful outside observer. He doesn’t blend in and doesn’t follow a specific goal or purpose. His limited interactions are the result of serendipity and not a plan. The concept often appears in literature and is associated with early metropolises like 19th-century Paris.
The Internet (or, more specifically, the world wide web), with its millions of individual websites, would be a paradise for people to “surf” through, as we used to call it back in the day. Today, however, most online interactions are commercial and either optimized to get things done as quickly as possible or to distract us as smoothly as possible. Unlike the flâneur, who performs his pursuit in solitude, many online activities are social. (That is, if I may add, also in stark contrast to how we pictured the first explorers of the online world as nerds without a social life, not realizing that even early BBS, the Usenet, IRC, etc. were social. We didn’t have the bandwidth and computation power for audio and video streams, but text interactions between people happened.)
In some ways, Morozov’s eight-year-old article is outdated. For example, he talks about Facebook’s vision of frictionless sharing. They had this idea that you’d log in with your Facebook account to apps and websites that would send everything you do back to the mothership. Facebook would aggregate and package that information and show you what your friends are reading, listening to, cooking, or whatever. Today we know that this idea didn’t take off, and Facebook did a 180° turn and locked their posting API completely. Nobody can automatically feed information to personal Facebook profiles anymore. Even Zuckerberg had to realize people want some privacy and do things without being continuously connected to their friends. Of course, talking about privacy, we hardly encounter websites without tracking codes (even I added one lately) anymore. Still, these generate anonymous profiles that rarely surface directly in our social sphere. And I feel we have a renaissance of smaller communities and dedicated places for social exchange disconnected from the rest of our (professional) online lives.
In other ways, however, Morozov is still right. We don’t “surf” the web anymore but mostly read chronological or algorithmic news feeds. It’s effortless and triggers the release of dopamine through its constant novelty. There is, however, still a web outside social media and large commercial estates. Some people write blogs (like me) that are still chronological but less noisy than social media updates. Other people create personal websites in the form of digital gardens to share their knowledge. The IndieWeb community is a place for creators of a diverse web. I wrote about the subject two months ago when I argued why I like blogging but not publishing a digital garden.
However, very often, the discussion centers around creation and not consumption. Even then, IndieWeb readers and RSS clients often mimic feeds and emphasize efficient access to sources we already follow. Let’s look at our patterns of content consumption and bake some time for serendipitous discoveries into them. When was the last time you “surfed” the web, starting with some personal website, following links, almost getting lost, but finding something interesting in the process? I think it has been a while.
We should sometimes turn off social media and go outside in the physical world. But we should also sometimes turn off social media but stay online and become cyberflâneurs again!
I’ve struggled a bit with the question of whether I should set up some analytics on my blog and profile website. So far, I had none, and Google Analytics, the obvious choice, was a no-go. I think Google already has a lot of power and data, and I didn’t want to feed them if I could avoid it. On the other hand, I write this blog not just for myself, but I also consider it a marketing tool for my freelance business and other professional and entrepreneurial goals. I have shared what other people think about content marketing for developers, and at least two have emphasized the importance of analytics. Having some insights into whether someone reads this at all would certainly be helpful. If I can support an independent, privacy-focused small business in the process, it’s even better.
There are a few of these smaller analytics providers. My choice was to go with Plausible Analytics. Their product is minimalist, but with all the essential features. They are fully open-source, hosted in Europe, and work without cookies. Also, among its competitors, Plausible is probably the most affordable option, starting at $48 per year (I want to support an independent founder but also have to mind my business expenses).
As you may know, my blog is hosted by micro.blog. And Manton, the founder of micro.blog, recently added a plug-in feature based on Hugo themes. Therefore, instead of just modifying my blog theme, I decided that I could build a tiny plug-in to simplify installation for me and others. It’s just a few lines of code and configuration, and you can find the plug-in source code on GitHub. It took me a single Pomodoro session to develop.
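In essence, the plug-in just injects Plausible’s standard tracking snippet into the page head through a Hugo partial. The snippet looks roughly like this, with the data-domain value obviously being site-specific:

```html
<!-- Plausible's standard tracking snippet; replace the data-domain
     value with your own site's domain. -->
<script defer data-domain="example.com" src="https://plausible.io/js/plausible.js"></script>
```

That’s really all there is to it, which is why the plug-in fit into a single Pomodoro session.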
Within less than 24 hours, Manton shared it and added it to the plug-in directory, and Plausible listed it as an integration in their documentation. There is also at least one member of micro.blog who started the free trial of Plausible Analytics.
Last but not least, I have decided to be fully transparent and make my analytics page public so that you can have insights into this blog, too.
Adam DuVander talks a lot about the idea of “signature content”. I recently published my recap of an interview with Adam, where he mentioned that term, and since then, I came across another great piece from him. He calls it the “developer content mind trick”. The idea is that companies should not just publish content about using and integrating their APIs and products, but also explain how they built their service and how they solved the underlying problems.
Now, one might argue that this is giving away valuable intellectual property and allows people to copy you more easily. And, indeed, it can happen, but it probably would even if you didn’t publish. In the developer space, a significant competitor for every product is the potential customer building it themselves instead of buying an existing solution. It’s the NIH syndrome - “not invented here”. Developers often see a product and think, “yeah, I could build that in a weekend”. This is where the mind trick comes in. You show how much effort went into the product and all the little corner cases that you’ve thought of that the potential customer hasn’t. Hence, you demonstrate your expertise as a company and the value of your product.
In my last article, I mentioned the term “cornerstone content”. It comes from Jake Jorgovan, an entrepreneur running marketing agencies for lead generation. He is not in the developer space, yet a lot of his ideas are similar. Let me add two quotes from his e-book, “The consultant’s path to thought leadership”.
“By giving our secrets away, we established ourselves as the leaders in the field. This built incredible trust among clients and referrals from our audience.”
“For a small firm, your trade secrets can create far more value as marketing materials than you can by holding them close to your chest. You can teach everything you know, and use that to attract more deals and opportunities your way.”
It doesn’t matter if you call it signature content or cornerstone content. Two examples from different areas show how powerful it can be to produce great content about what your company does and how you do it and then leverage it to drive sales. To some extent, I’m doing something similar on this blog. I write about developer tutorials and content marketing, and I quote experts in the field, probably driving some of my readers to learn writing or hire them instead of me. But I bet that I can demonstrate my expertise in aggregating knowledge and connecting the dots between different ideas, and that people will want to work with me because of it.
And with that, you’ve reached my sales pitch. One thing I do is help companies with developer marketing by creating technical content. Talk to me, and we’ll find out if I can help you.
Sometimes people talk about the death of television. Who needs TV when we have Amazon Prime Video? However, we also still have printed newspapers and magazines, and we nevertheless have broadcast radio. Admittedly, each of these media has gone through a transformation and is less relevant than it used to be, but they are far from dead. The same thing is happening with TV. Even though I have no professional relationship with that field, I find it extremely interesting to observe the TV market.
First of all, to clarify our terms, I think that TV covers three things: broadcast technology (as opposed to streaming), linearly scheduled programming (as opposed to on-demand access), and traditional brands and corporations. In some ways, each of these will prevail, at least partially.
Broadcast technology is energy efficient since there is only one signal sent for multiple subscribers. It is also reliable and doesn’t go down with spikes in viewership. Most of all, it is private. Websites and smart TVs track you with ad-tech, but nobody knows whether you’ve tuned into a broadcast. What makes it difficult for broadcasters to know their audience is a win for privacy. Therefore, even with all advances in streaming, I don’t think it’s a good idea to get rid of broadcast technology, at least as a fall-back; in the same way, we may want to preserve the POTS (plain old telephone system) even though we have VoIP.
Linearly scheduled programming looks like a downside at first, but live streams are a huge trend when you observe social media. Whether it’s TikTok, Instagram, or YouTube, every platform has a live streaming feature. Twitch thrives exclusively on them. In business, we sign up for webinars. In developer relations, two companies have recently launched video portals with developer content that mimic TV stations: Cloudflare TV and Microsoft Learn TV (and I’m glossing over those with regular Twitch schedules). We even sync on-demand content through Netflix Party. It seems that we still like to watch things as they happen and at the same time as other people experience them.
When looking at traditional brands and corporations, it’s interesting to see how they try to transform into the digital age. There will undoubtedly be winners and losers. TV stations and content owners are launching streaming services and joining the “streaming wars”, Disney Plus being the most prominent and successful example so far.
In Germany, where I live, the two public broadcasters ARD and ZDF, funded by mandatory household media licenses, have invested a lot into turning their websites and apps into Netflix-esque on-demand libraries. They also launched funk, a content network that produces videos mostly for younger audiences, which they distribute exclusively online, primarily on commercial social media platforms and YouTube. While some might frame this as a desperate attempt at staying relevant, the best German-speaking YouTube content is now often made by funk (I admit to being a huge fan of Philipp Walulis’ work).
German private broadcaster RTL has launched TVNOW, whereas other private broadcasters have teamed up to build Joyn. The latter is particularly exciting because it features both on-demand and live content, including streams for almost all public and private channels (except those that RTL owns, whose live feeds are on TVNOW). In some ways, Joyn is similar to Hulu in the US. Joyn tries to establish a brand of its own while simultaneously showcasing the brands of the TV stations that deliver the content. They also include content libraries from other online publishers, and their Joyn originals often feature influencers or YouTube personalities. Again, this might seem like a mash-up of unrelated things, but it could also be the perfect strategy to bridge the gap between the old and the new worlds of entertainment. I sincerely root for their success, because I think we need a German or European-owned Netflix. If Joyn plays its cards right, it can fill that role.
Kin Lane, the API Evangelist, wrote about API providers being API matchmakers. I believe that idea goes along very well with content marketing for developers through tutorial-style developer content. In the article, which I discovered on APIscene, but Kin originally posted on his blog earlier this year, he claims that the value of a single API is limited, or at least not very visible. The real power comes from the combination of multiple different APIs. API providers need to know how their API fits into the broader landscape of APIs that their customers already use or might want to use. That awareness helps to communicate the value of your product. Kin suggests that API providers should have integration pages and make sure their API is also available on iPaaS providers (think IFTTT or Zapier).
While I agree that a presence on iPaaS providers should be a milestone in every API program and that integration pages are an essential element of developer portals, they are not enough. Kin writes about playing with different APIs in Postman and finding connections. Some of an API provider’s customers might need a little hand-holding to do that. That’s where developer tutorials come in. In their basic form, they typically show how to use an API in a specific programming language or framework. However, they can cover more than that and show integrations with other APIs as well. A good current example is “Build a Workout Tracker with GraphCMS, Auth0 and Hasura” from Jesse Martin of GraphCMS, where he showcases the value of GraphCMS by connecting the product with Auth0 and Hasura.
The great thing about what could be called a combinational developer tutorial is that it adds value for all products and APIs involved. Developer content like this piques the interest of multiple developer communities. It builds bridges, thus making it a piece of content with substantial value and a great, shareable marketing tool.
Did you read this and have some ideas for integrations between APIs and developer products, but lack the time or skills to write a tutorial? Then come and learn about my developer content production services on my website and contact me to find out how we can work together.
The Interintellect is, according to its Twitter bio, a “global community and talent platform for public intellectuals”. I discovered the Interintellect a while ago through its ties with Ness Labs and the Roam Research user community and read its manifesto. While I could personally relate to some of the things written in it, I found it initially hard to wrap my head around what the community is.
The Interintellect offers virtual salons on the Zoom videoconferencing app. Each of these three-hour group discussions (10-20 people) has a specific topic. I joined three of them already. My first was about entrepreneurship, specifically asking whether there are too many entrepreneurs in the world. The second salon dealt with slow and fast thinking, as in Daniel Kahneman’s model. Finally, the third conversation was about reputation and how it works in our globally connected world. I enjoyed listening in and adding my comments and left each of these discussions with new insights.
A few days later, there was an exchange on Twitter where Seyi Taylor, one of the other participants, wondered why discussions at these salons “are so devoid of ego”. He subsequently pointed to an episode of the MetaLearn podcast in which Anna Gát, the founder of the Interintellect, was interviewed. The things Anna said in the interview and the discussion on Twitter gave a few pointers, but one central aspect is probably the type of people that the community attracts. According to Anna, there’s enormous diversity, not just between the people but also in that most individuals are multidisciplinary. Folks are very open to new ideas. Many of them have some notion of otherness (e.g., because they are migrants), and others are “restarters”. They are givers instead of takers. What everyone has in common is that they want to nurture their “intellectual life”, a part of themselves that often falls behind work, family, and other commitments.
Without trying to take anything away from the Interintellect or diminish Anna’s skill as a host and leader, these exchanges are not exclusive to that community. I experienced similar discussions in a philosophical group I had with friends in college, and right now in my local Effective Altruists group. There are places for a genuine exchange where people come to learn and exchange ideas. I believe it also helps that they are non-competitive, which means they are deliberately designed as an incubator, not as a “battlefield” of ideas, and also that the participants do not compete outside the space, for example, for jobs or research grants. The latter is a direct result of diversity.
I want to add another related thought: in these virtual or physical spaces, you realize that everyone present is smart and thoughtful and capable of understanding various notions, but each individual’s expertise and experience is different. They all are impressive in their own way. This is not a place to impress others with what you know. Still, after going through an initial feeling of impostor syndrome among these fantastic people, you find out that you also have something uniquely yours to bring to the table. And that’s where the magic happens!
Adam DuVander is a journalist turned content strategist for developer-focused companies. I recently listened to an interview with Adam, which was part of the Sprinklr Coffee Club series. On this blog, I’ve previously posted short summaries of talks, podcasts, or books by Stephanie Morillo, Lauren Lee, Hiten Shah, and Lorna Mitchell, combined with my thoughts on the respective subjects. In a similar format, I want to reiterate some of Adam’s ideas as well.
To motivate the work on developer content, Adam said that content marketing as a part of developer marketing or developer relations (DevRel) scales better than sending developers to conferences and meetups. If you’re just getting started, you can experiment with blog posts. However, he noted that many APIs don’t even have a real “Getting Started” guide as part of their API documentation, so that’s also an excellent place to start.
A central piece of content should be “a definitive guide on what the company knows”. Often, it is a downloadable e-book or whitepaper, but Adam said to be wary of gating access (e.g., with email signup). He calls this “signature content”. I recently saw another content marketer describing a similar approach who called it “cornerstone content”. The idea is to show your full expertise and demonstrate thought leadership. It ties in with the intention of content reuse and multiplication, where one piece of content leads to many derivatives. I’ve seen a lot of examples of those, such as infographics, social media posts, transcripts of podcasts, and many more. The “signature content” can be the foundation of everything else.
Content is a long game (it is one of the truths that Hiten Shah also emphasized in his talk). And it is crucial to be aware of it to avoid overblown expectations. No respectable content marketer or SEO agency can promise overnight success! You have to plant a lot of seeds, evaluate, and double down on what works. It’s also a good idea to have a mix of evergreen content and short-term content that has viral potential.
Another great thought from Adam, who previously worked at ProgrammableWeb, was that producing a high volume of content is essential when advertisers fund you. For everyone else, including most dev-focused SaaS companies, high quality and relevance are way more important than quantity. And remember, the goal of technical writing is to “share knowledge, not features”.
At the end of this post, let me add that I’m happy to talk about your developer content. Send me an email and let’s find out how we can work together.
I’ve been using the Pomodoro technique for most of my work for a couple of years, which has been a great productivity tool. Working in time-boxed blocks helps me keep focused without distractions. I recently learned about Work Cycles, which is a similar but even more structured technique. In addition to 30-minute blocks of focused work followed by 10-minute breaks, it includes specific questions for more mindful productivity, such as setting goals and evaluating one’s energy levels. The system also works great when combined with social accountability, and that’s how I learned about it.
Already a subscriber to the Ness Labs newsletter, I decided to support their community with a paid membership and recently joined their forum as well. In the “Events” section, I saw a thread about Work Cycles, calling it “a group Pomodoro work session”, a description that piqued my interest. I signed up for the first Saturday event, as I thought I could use some motivation to catch up with work over the weekend, and joined the call yesterday.
There were six of us in a Zoom call. Kristijan, our host, asked each of us what we wanted to tackle in the session. Coincidentally, all of us were planning on doing something on a tech-related topic: learning, writing, or coding. I usually don’t have a problem working on my own and motivating myself (otherwise, it would be tough when you’re self-employed). Seeing this group of strangers working on something similar on their Saturday, however, immediately made me raise my morale rating from three to five out of five.
We went through three 30-minute blocks. Kristijan always gave us two minutes for preparation and evaluation, which we were allowed but not obligated to share, and set the timer for work. He also led the conversation about our experiences during breaks and in the debrief following the session. At least one other participant had experience with the Pomodoro technique, while another said they usually work in longer blocks. We also talked about a service called Focusmate that offers a similar format in a one-on-one setting.
I don’t think this is something I would do every day. Still, I can very much imagine doing it weekly to get some additional motivation, connect with people, and talk productivity.
There are various formats to describe data models. One of my favorite approaches is Linked Data based on RDF. That is why I based CloudObjects on this technology. My idea was to use RDF to describe APIs and the configuration of various application components. I quickly realized that a semantic web platform with built-in distribution and access controls has more use cases.
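For readers unfamiliar with RDF: it expresses everything as subject-predicate-object triples. The sketch below models that idea in plain JavaScript with made-up, pizza-themed identifiers; it illustrates the triple concept only, not CloudObjects’ actual data model or API.

```javascript
// An RDF graph is a set of subject-predicate-object triples.
// All identifiers below are hypothetical examples.
const triples = [
  ["ex:Margherita", "rdf:type", "ex:Pizza"],
  ["ex:Margherita", "ex:hasTopping", "ex:Tomato"],
  ["ex:Margherita", "ex:hasTopping", "ex:Mozzarella"],
];

// Minimal triple-pattern matcher; null acts as a wildcard,
// similar in spirit to a basic SPARQL triple pattern.
function match(store, s, p, o) {
  return store.filter(
    ([ts, tp, to]) =>
      (s === null || ts === s) &&
      (p === null || tp === p) &&
      (o === null || to === o)
  );
}

// Query: all toppings of the Margherita.
const toppings = match(triples, "ex:Margherita", "ex:hasTopping", null)
  .map(([, , o]) => o);
// → ["ex:Tomato", "ex:Mozzarella"]
```

Because everything is just triples, the same mechanism that describes a pizza can describe an API or an application component, which is what makes the model so flexible.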
For a more approachable start, I’ve created a demonstration of CloudObjects Core using, wait for it, pizza! You can check out my latest post, “Pizza Time! Using CloudObjects Core for Domain Models”, on the CloudObjects Blog. As always, I’m happy for any feedback on the article!
Also, in case you didn’t know, I create developer content for third-party companies as a freelancer. If you liked the style and content of my tutorial linked above, hit me up, and we can discuss how I can create something similar for your product.