Lukas Rosenstock's Blog

Sometimes people talk about the death of television. Who needs TV when we have Amazon Prime Video? And yet we still have printed newspapers and magazines, and we still have broadcast radio. Admittedly, each of these media has gone through a transformation and is less relevant than it used to be, but they are far from dead. The same thing is happening with TV. Even though I have no professional relationship with that field, I find it extremely interesting to observe the TV market.

First of all, to clarify our terms, I think that TV covers three things: broadcast technology (as opposed to streaming), linearly scheduled programming (as opposed to on-demand access), and traditional brands and corporations. In some ways, each of these will prevail, at least partially.

Broadcast technology is energy-efficient since a single signal serves any number of receivers. It is also reliable and doesn’t go down with spikes in viewership. Most of all, it is private. Websites and smart TVs track you with ad tech, but nobody knows whether you’ve tuned into a broadcast. What makes it difficult for broadcasters to know their audience is a win for privacy. Therefore, even with all the advances in streaming, I don’t think it’s a good idea to get rid of broadcast technology, at least as a fallback; in the same way, we may want to preserve POTS (plain old telephone service) even though we have VoIP.

Linearly scheduled programming looks like a downside at first, but live streams are a huge trend when you observe social media. Whether it’s TikTok, Instagram, or YouTube, every platform has a live streaming feature. Twitch thrives exclusively on them. In business, we sign up for webinars. In developer relations, two companies have recently launched video portals with developer content that mimic TV stations: Cloudflare TV and Microsoft Learn TV (and I’m glossing over those with regular Twitch schedules). We even sync on-demand content through Netflix Party. It seems that we still like to watch things as they happen and at the same time as other people experience them.

When looking at traditional brands and corporations, it’s interesting to see how they try to adapt to the digital age. There will undoubtedly be winners and losers. TV stations and content owners are launching streaming services and joining the “streaming wars”, with Disney Plus being the most prominent and successful example so far.

In Germany, where I live, the two public broadcasters ARD and ZDF, funded by mandatory household media licenses, have invested a lot into turning their websites and apps into Netflix-esque on-demand libraries. They also launched funk, a content network that produces videos primarily for younger audiences and distributes them exclusively online, mostly on commercial social media platforms and YouTube. While some might frame this as a desperate attempt at staying relevant, the best German-speaking YouTube content is now often made by funk (I admit to being a huge fan of Philipp Walulis’ work).

German private broadcaster RTL has launched TVNOW, whereas other private broadcasters have teamed up to build Joyn. The latter is particularly exciting because it features both on-demand and live content, including streams for almost all public and private channels (except those that RTL owns, whose live feeds are on TVNOW). In some ways, Joyn is similar to Hulu in the US. Joyn tries to establish itself as a brand while simultaneously showcasing the brands of the TV stations that deliver the content. It also includes content libraries from other online publishers, and the Joyn originals often feature influencers or YouTube personalities. Again, this might seem like a mash-up of unrelated things, but it could also be the perfect strategy to bridge the gap between the old and the new worlds of entertainment. I sincerely root for their success, because I think we need a German or European-owned Netflix. If Joyn plays its cards right, it can fill that role.

Kin Lane, the API Evangelist, wrote about API providers being API matchmakers. I believe that idea goes along very well with content marketing for developers through tutorial-style developer content. In the article, which I discovered on APIscene but which Kin originally posted on his blog earlier this year, he claims that the value of a single API is limited, or at least not very visible. The real power comes from the combination of multiple different APIs. API providers need to know how their API fits into the broader landscape of APIs that their customers already use or might want to use. That awareness helps to communicate the value of your product. Kin suggests that API providers should have integration pages and make sure their API is also available on iPaaS providers (think IFTTT or Zapier).

While I agree that a presence on iPaaS providers should be a milestone in every API program and that integration pages are an essential element of developer portals, they are not enough. Kin writes about playing with different APIs in Postman and finding connections. Some of an API provider’s customers might need a little hand-holding to do that. That’s where developer tutorials come in. In their basic form, they typically show how to use an API in a specific programming language or framework. However, they can cover more than that and show integrations with other APIs as well. A good current example is “Build a Workout Tracker with GraphCMS, Auth0 and Hasura” from Jesse Martin of GraphCMS, where he showcases the value of GraphCMS by connecting the product with Auth0 and Hasura.

The great thing about what could be called a combinational developer tutorial is that it adds value for all products and APIs involved. Developer content like this piques the interest of multiple developer communities. It builds bridges, which makes it a piece of content with substantial value and a great, shareable marketing tool.

Did you read this and now have some ideas for integrations between APIs and developer products but lack the time or skills to write a tutorial? Then come and learn about my developer content production services on my website, and contact me to find out how we can work together.

The Interintellect is, according to its Twitter bio, a “global community and talent platform for public intellectuals”. I discovered the Interintellect a while ago through its ties with Ness Labs and the Roam Research user community and read its manifesto. While I could personally relate to some of the things written in it, I initially found it hard to wrap my head around what the community is.

The Interintellect offers virtual salons on the Zoom videoconferencing app. Each of these three-hour-long group discussions (10-20 people) has a specific topic. I have joined three of them already. My first was about entrepreneurship, specifically asking whether there are too many entrepreneurs in the world. The second salon dealt with slow and fast thinking, as in Daniel Kahneman’s model. Finally, the third conversation was about reputation and how it works in our globally connected world. I enjoyed listening in and adding my comments and left each of these discussions with new insights.

A few days later, there was an exchange on Twitter where Seyi Taylor, one of the other participants, wondered why discussions at these salons “are so devoid of ego”. He subsequently pointed to an episode of the MetaLearn podcast in which Anna Gát, the founder of the Interintellect, was interviewed. The things Anna said in the interview and the discussion on Twitter gave a few pointers, but one central aspect is probably the type of people that the community attracts. According to Anna, there’s enormous diversity, not just between people but also within them, as most individuals are multidisciplinary. Folks are very open to new ideas. Many of them have some notion of otherness (e.g., because they are migrants), and others are “restarters”. They are givers instead of takers. What everyone has in common is that they want to nurture their “intellectual life”, an aspect that often falls behind work, family, and other parts of life.

Without trying to take anything away from the Interintellect or diminish Anna’s skill as a host and leader, these exchanges are not exclusive to that community. I have experienced similar discussions in a philosophy group I had with friends in college, and I experience them right now in my local Effective Altruism group. There are places for genuine exchange where people come to learn and share ideas. I believe it also helps that they are non-competitive, which means they are deliberately designed as an incubator, not as a “battlefield” of ideas, and also that the participants do not compete outside the space, for example, for jobs or research grants. The latter is a direct result of diversity.

I want to add another related thought: in these virtual or physical spaces, you realize that everyone present is smart, thoughtful, and capable of understanding various notions, but each individual’s expertise and experience are different. They are all impressive in their own way. It is not a place to impress others with what you know. Still, after going through an initial feeling of impostor syndrome among these fantastic people, you find out that you, too, have something unique to bring to the table. And that’s where the magic happens!

Adam DuVander is a journalist turned content strategist for developer-focused companies. I recently listened to an interview with Adam, which was part of the Sprinklr Coffee Club series. On this blog, I’ve previously posted short summaries of talks, podcasts, or books by Stephanie Morillo, Lauren Lee, Hiten Shah, and Lorna Mitchell, combined with my thoughts on the respective subjects. In a similar format, I want to reiterate some of Adam’s ideas as well.

To motivate the work on developer content, Adam said that content marketing as a part of developer marketing or developer relations (DevRel) scales better than sending developers to conferences and meetups. If you’re just getting started, you can experiment with blog posts. However, he noted that many APIs don’t even have a real “Getting Started” guide as part of their API documentation, so that’s also an excellent place to start.

A central piece of content should be “a definitive guide on what the company knows”. Often, it is a downloadable e-book or whitepaper, but Adam said to be wary of gating access (e.g., with email signup). He calls this “signature content”. I recently saw another content marketer describe a similar approach, calling it “cornerstone content”. The idea is to show your full expertise and demonstrate thought leadership. It ties in with the intention of content reuse and multiplication, where one piece of content leads to many derivatives. I’ve seen a lot of examples of those, such as infographics, social media posts, transcripts of podcasts, and many more. The “signature content” can be the foundation of everything else.

Content is a long game (one of the truths that Hiten Shah also emphasized in his talk), and it is crucial to be aware of that to avoid overblown expectations. No respectable content marketer or SEO agency can promise overnight success! You have to plant a lot of seeds, evaluate, and double down on what works. It’s also a good idea to have a mix of evergreen content and short-term content that has viral potential.

Another great thought from Adam, who previously worked at ProgrammableWeb, was that producing a high volume of content is essential when advertisers fund you. For everyone else, including most dev-focused SaaS companies, high quality and relevance are way more important than quantity. And remember, the goal of technical writing is to “share knowledge, not features”.

At the end of this post, I want to let you know that I’m happy to talk about your developer content. Send me an email, and let’s find out how we can work together.

I’ve been using the Pomodoro technique for most of my work for a couple of years, and it has been a great productivity tool. Working in time-boxed blocks helps me stay focused without distractions. I recently learned about Work Cycles, which is a similar but even more structured technique. In addition to 30-minute blocks of focused work followed by 10-minute breaks, it includes specific questions for more mindful productivity, such as setting goals and evaluating one’s energy levels. The system also works great when combined with social accountability, and that’s how I learned about it.

Already being a subscriber to the NessLabs newsletter, I recently decided to support their community with a paid membership and joined their forum as well. In the “Events” section, I saw a thread about Work Cycles, calling it “a group Pomodoro work session”, a description that piqued my interest. I signed up for the first Saturday event, as I thought I could use some motivation to catch up with work over the weekend, and joined the call yesterday.

There were six of us in a Zoom call. Kristijan, our host, asked each of us what we wanted to tackle in the session. Coincidentally, all of us were planning on doing something on a tech-related topic: learning, writing, or coding. I usually don’t have a problem working on my own and motivating myself (otherwise, it would be tough being self-employed). Seeing this group of strangers working on something similar on their Saturday, however, immediately made me raise my morale rating from three to five out of five.

We went through three 30-minute blocks. Kristijan always gave us two minutes for preparation and evaluation, which we were allowed but not obligated to share, and set the timer for work. He also led the conversation about our experiences during breaks and in the debrief following the session. At least one other participant had experience with the Pomodoro technique, whereas another person mentioned they usually work in longer blocks. We also talked about a service called Focusmate that offers a similar format in a one-on-one setting.

I don’t think this is something I would do every day. Still, I can very much imagine doing it weekly to get some additional motivation, connect with people, and talk productivity.

There are various formats for describing data models. One of my favorite approaches is Linked Data based on RDF, which is why I based CloudObjects on this technology. My idea was to use RDF to describe APIs and the configuration of various application components. I quickly realized that a semantic web platform with built-in distribution and access controls has more use cases.
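To give a rough idea of what that looks like, here is a tiny, hypothetical Turtle snippet. The vocabulary is made up for illustration and is not the actual CloudObjects ontology; it merely sketches how an API and one of its endpoints could be described as linked data:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.com/vocab#> .

<https://api.example.com/> a ex:WebAPI ;
    rdfs:label "Example Weather API" ;
    ex:hasEndpoint <https://api.example.com/forecast> .

<https://api.example.com/forecast> a ex:Endpoint ;
    ex:httpMethod "GET" ;
    rdfs:comment "Returns the weather forecast for a given city as JSON." .
```

Because every resource is identified by a URI, descriptions like this can link to each other across domains, which is what makes RDF attractive for describing distributed systems.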

For a more approachable start, I’ve created a demonstration of CloudObjects Core using, wait for it, pizza! You can check out my latest post, “Pizza Time! Using CloudObjects Core for Domain Models”, on the CloudObjects Blog. As always, I’m happy for any feedback on the article!

Also, in case you didn’t know, I create developer content for third-party companies as a freelancer. If you liked the style and content of my tutorial linked above, hit me up, and we can discuss how I can create something similar for your product.

Yesterday, I did a little experiment; the first baby step into what is known as a #DigitalDetox or #DigitalSabbath. I switched off my laptop, phone, and other devices in the afternoon, and I didn’t switch them on again until this morning. A while ago, I found a Digital Sabbath website that explains why turning off technology for some time is healthy and useful, because most of the technology we use follows addictive design patterns. Those can be harmful if we’re no longer in control and, instead, the technology is in control of us. On top of that, because we have non-stop access to entertaining content, we rarely experience boredom. A bored mind is often a prerequisite for a creative process. The website poses it as a challenge to take one day off each week for three months. I haven’t signed up for it yet, because I’m not sure where it fits best into my week, but I’m planning to take breaks more often.

Apart from that website, I also watched a YouTube video this week titled “How I Tricked My Brain To Like Doing Hard Things”, which describes a similar phenomenon and which I can highly recommend. It suggests something that goes beyond the digital detox: a “dopamine detox”. The method includes refraining not just from technology but also from things like junk food or offline pleasure activities. It was another motivation for me to try this.

Don’t get me wrong. I love technology and social media and entertainment, and everything else this modern world has to offer. I already talked about my relationship with social media in the first post this year. It might sound hypocritical for me, as a technologist, to advocate for less. At the same time, I believe the saying that sometimes “less is more” is accurate as well. I used my time off to play the piano a bit, finally start reading one of the books that had been waiting on my shelf for a long time, and make progress in another. There’s more time to do the things you always wanted to do if you don’t spend it mindlessly scrolling on Twitter or browsing Netflix without actually deciding to watch a show.

My primary programming language is PHP, which means that I am coding in something that 80% of web servers use and that 80% of developers hate. It is one of the languages with the worst reputation. Today, I read another piece trying to deal with the question “Why developers hate PHP”.

The article does an excellent job of explaining the origins of PHP, and it also shows the recent advancements and how much the language has improved. The author argues that many developers have made up their minds based on older versions of the language and have not updated their opinion in light of new developments. Also, most widely deployed things are controversial, and it’s easier to hate on something everyone knows than on something more obscure.

In the world of APIs, the choice of a programming language becomes less important. Different services can have various kinds of implementation details and communicate over standardized HTTP interfaces. If you are an API provider, you can build your backend in Python, Ruby, Go, JavaScript, Rust, or whatever you prefer. You can even mix and match using microservices and internal APIs.

However, you have to be aware that the consumers of your APIs come with all sorts of languages and frameworks in which they will integrate your API. Your support and developer relations teams will receive questions about all of them, and due to its popularity, PHP will be among them. In my opinion, no API program or developer portal is complete without code samples and tutorials covering PHP usage. If you offer SDKs, you need to have one for PHP.
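As a rough sketch of what such a sample could look like (using nothing but PHP’s built-in cURL extension against a made-up endpoint and API key, so adjust everything to your actual API), consider:

```php
<?php
// Call a hypothetical REST endpoint with an API key and decode the JSON response.
$apiKey = getenv('EXAMPLE_API_KEY'); // never hard-code credentials in samples

$ch = curl_init('https://api.example.com/v1/messages');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: Bearer ' . $apiKey,
        'Accept: application/json',
    ],
]);

$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
curl_close($ch);

if ($status === 200) {
    $messages = json_decode($body, true);
    print_r($messages);
} else {
    echo "Request failed with HTTP status $status\n";
}
```

Trivial as it is, a snippet like this saves PHP developers the friction of translating a curl command or a Python example into their own stack.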

So, if you are a Java shop that’s too “enterprisey” for PHP or a hip startup too cool to hire PHP developers, that’s where outsourcing comes in. And guess what, I can help you. I have been coding in PHP for almost two decades, and my current focus is creating developer content around APIs. I can also tap into the freelance talent pool to build content in all sorts of languages. Let’s talk about how I can support your customers from the PHP world.

I read Stephanie Morillo’s “The Developer’s Guide to Content Creation”, an e-book with a self-explanatory title. I can recommend it to everyone who’s getting into technical writing because it covers a lot of ground. In terms of its objective, it is similar to Lauren Lee’s “The Art Of Technical Writing” talk I wrote about last week, though it’s more extensive (obviously) and covers a few different areas.

Stephanie writes about defining your goals and generating content ideas; going through the planning, writing, and editing stages; titles, calls to action, and resources; promoting content; and, finally, using analytics to iterate and improve.

For today, I want to focus on the first step, defining your goals. This post is inspired by the chapter in the book but contains additional thoughts and ideas from me. Writing and content creation can have many different purposes, and just creating something for a personal blog because you want to practice is a valid reason. Nevertheless, you have to think more strategically if you are a developer-focused company or are creating and sharing, for example, an open-source library with the world.

Every written piece of content, even your API reference, appears in search engines and thus is part of your marketing material. It may be the first part of your product someone sees. It doesn’t mean, however, that you must optimize everything for newbies or overinvest in SEO. There is a lot of value in creating content for advanced users of your product. Even documenting edge cases can pay off if it takes some load off your support.

Whenever you write something, think of your target audience. What do the developers know? Where are they in your funnel? Do you want to inspire them to start trying your product, or are they already sold and need some help? Often it is helpful to make up “personas”, which are fictional readers for whom you write. The most important thing to consider, though, is that you are not your target audience. You have already solved a problem that others still have, and you present your solution.

Also, think about your content strategy as part of the overall product strategy. For example, if you have an API with a wide range of applications, but your content only features use cases from a specific vertical, you will mainly attract developers from that vertical. Is that what you want?

Now that you have some things to consider when it comes to content marketing for developers, here’s my regular reminder that I’m available for hire for contract work. We can plan and create your developer content together. I’m looking forward to hearing from you!

Last night I joined the Vonage Developer Day live stream for a single presentation, Lauren Lee’s “The Art of Technical Writing”. Her talk’s objective was to motivate developers to write technical tutorials and provide them with the basics they need to get started. Lauren has an unusual background: she was a high school teacher before switching to a technical career, so she knows a thing or two about education.

The talk was incredibly fast-paced due to her passion and energy, so I had a hard time keeping up with writing notes; nevertheless, I want to give you a little (subjective) summary.

Lauren says developer content should be instructional, non-assuming, timely, correct, and concise. The crucial points here are non-assuming, because we often make wrong assumptions about what common knowledge entails, and timely, because technical content can become outdated quickly. If you don’t know what to write about, “write the article you wish you found when you googled something.”

When it comes to creating tutorials, she suggests getting early feedback on an outline before starting to code and write. Then, implement the application and keep a journal or good commit comments that form the basis of your writing. After coding, move on to writing as soon as possible, so the memories of your challenges are still fresh. Edit later. Take time for the revisions and, again, get feedback.

A developer tutorial should start with an introduction, set a goal, explain the prerequisites, and then go through the necessary steps. It’s not required to document the whole codebase, just the essential parts. Include screenshots or animations. Put a summary at the end, and don’t worry about repetition; some of your readers will arrive there after skipping other parts.

Some of Lauren’s general writing advice includes using a conversational tone, avoiding simplification words, using inclusive language, and avoiding references that might become outdated soon (something I wrote about lately, too). And, of course, practice!

Once you’ve published your piece, share it loudly. Send it to people and look for cross-posting opportunities. Analytics tools are your friends to find out what works.

I enjoyed this talk. Though I have a lot of experience writing on this blog and the CloudObjects blog and creating content for my clients, there were still some new aspects, and good ones worth hearing again, that will help me get better at my craft.

So, what are you waiting for? Go and create some amazing developer content! Or, if you don’t want to do it yourself, hire me for a contract.

One of the biggest and most unexpected pieces of news from the tech world this week was the acquisition of Keybase by Zoom. Video communications app Zoom is one of the big winners of the current COVID-19 pandemic but has received criticism with regard to privacy and security. In contrast, Keybase has done a lot of exciting things in the realm of zero-knowledge, end-to-end encrypted tools for individuals and businesses alike, but appears stuck in its nerd and crypto niche.

I found out about the acquisition on Twitter, where a lot of people have negative attitudes and loudly proclaim that they are deleting their Keybase accounts. The Keybase blog post doesn’t sound overly optimistic about the product’s future, and many expect the app to land in the Incredible Journey graveyard in the foreseeable future.

Selling their startup is a decision that I assume no founder takes lightly, so I am very wary of accusing anybody of being a sell-out. At the same time, I am worried because every merger or acquisition decreases the number of independent players on the market, and loss of competition generally hurts consumers, so I always feel a little sad. A good counter-argument, however, is that we have a strong dominance of the so-called GAFAM - Google, Apple, Facebook, Amazon, and Microsoft. Two independent players teaming up stand a better chance against the behemoths.

I am cautiously optimistic here. Zoom’s biggest competitors are Microsoft (with both MS Teams and Skype) and Google (Meet), both of which are part of a business application suite. Keybase has a team product with team chat and file storage, all end-to-end encrypted. Zoom could build on this to move beyond video calls and offer a full zero-knowledge collaboration suite for businesses. And even if it doesn’t play out like this, bringing encryption to mainstream Zoom is a huge win.

I don’t expect the Keybase app to shut down soon, as I assume it’s not too costly to keep it up, and, last but not least, the Stellar foundation might step up. We could even end up with an open-source Keybase server; the client-side code is already open source. Still, I’d love to hear more about their plans soon to get a bit of confidence before investing time and effort in using Keybase.

While going through older stuff saved in Pocket, I found a talk titled “Building A Content Marketing Machine” by Hiten Shah, which he gave at Heavybit, an accelerator for developer-focused startups. While the video is a few years old, it makes a lot of good points about content marketing for developers that still apply today.

If you look at the traffic for developer content such as blog posts, organic search is the primary source. Social media like Twitter is fantastic for engaging with developers but typically not a huge source of traffic. Hence, SEO (search engine optimization) is essential, but there are no shady tricks in SEO anymore. The only formula that works is to produce both quantity and quality and be patient. Content is a long game.

You should always be aware of your audience. Targeting developers at startups and CTOs at enterprises is entirely different. And you have to remember that the primary purpose of content is to provide something of value for them, not just, for example, show off your company culture.

Also, don’t just invest in content production but also in promotion. Influencer marketing works well for developers, so reach out to relevant people directly. Repurposing content in different formats, such as a blog post based on a conference talk or a podcast, is worth it because you can increase the reach of your content without investing in something new every time.

Finally, outsourcing content production is possible. Hiten gave the example of Kissmetrics, which, at some point, had 99% of its blog posts written by guest authors.

To summarize, you need both quantity and quality in technical content, tailored to your audience, and you can tap into external talent to create it. And guess what, I provide precisely this kind of service through my consulting business. Contact me to learn more!

There is a lot of buzz around “no-code” tools that empower people to build things without writing code. Website builders like Wix fall into this category, and so do iPaaS like IFTTT or Zapier. Makerpad is a community where people can learn how to launch a business with only those tools and without having to be or hire a developer. While I love and use some of those tools myself, they are also limited and don’t offer the full power of programming.

Anil Dash is the CEO of Glitch, a web-based IDE with a cloud-based runtime where people can write code and connect with a community of developers. He recently published an article on LinkedIn about a concept called “Yes Code”. Anil has similar sentiments about the potential of being able to code and believes that we should empower people to learn it instead of just hiding the code behind the abstraction layers of “no-code” tools. He writes about coding as a superpower and how it can help us build a better, “new human web” when we include more people in the process. I don’t want to repeat his points, so go and read his article.

For me, Anil’s thoughts are a good reminder of why I’m passionate about excellent API design and unique developer content. Yes, we need good material to teach the basics of programming, but we also need to make our APIs, SDKs, and (open-source) libraries accessible and beginner-friendly. It is not only the right thing to do if you care about being inclusive, but it also makes good business sense to extend your audience and help someone build their next independent business on top of your API.

I can help you improve your API design to make it better for everyone, not just beginners, and I can create additional content to teach your API or developer product. Send me an email or fill out this form to learn more about my services.

It’s May 4th today. Happy Star Wars Day!

In case you didn’t know why this is Star Wars Day, think of the famous quote from the movies: “may the force be with you”. Well, doesn’t “may the force …” sound a bit like “May, the fourth”? It’s a pop-cultural reference, and not everybody gets it. That made me think about whether or not to use cultural references in technical writing and developer content.

On the one hand, there is a particular set of famous cultural works that are associated with “nerds”, and being a software developer is considered being a part of the same (sub)culture. Developers can bond over shared interests in movies, music, etc. in the same way as they can bond (or playfully fight) over their favorite programming language or text editor. Fictional worlds provide engaging scenarios away from the mundane daily (home) office life, adding color and depth to sample code and tutorials. Why not take your first steps into the world of APIs with the Star Wars API?
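For example, here is a minimal sketch in PHP that fetches a character from the Star Wars API, assuming the public swapi.dev endpoint still responds the way it is documented at the time of writing:

```php
<?php
// Fetch a Star Wars character from the public SWAPI and print the name.
$json = file_get_contents('https://swapi.dev/api/people/1/');

if ($json !== false) {
    $character = json_decode($json, true);
    echo $character['name'] . PHP_EOL; // should print "Luke Skywalker"
}
```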

On the other hand, referencing works from the Western male-dominated nerd culture could backfire and make women and people from different cultural backgrounds feel excluded. I firmly believe that writing code and participating in the API economy is for everyone. Hence, we should be accommodating to folks from all walks of life.

Additionally, heavy use of references to commercial works of art could be considered copyright infringement. That is something especially larger companies should think about (and consult their legal department on) before they lean on these works too heavily.

That said, are you looking for additional tutorials for your API, with or without cultural references? Check out my website for developer content production offers and talk to me about them. I am looking forward to hearing from you.

In my corner of the Internet (or dare I say “filter bubble”), I’ve seen a lot of recent conversations resurfacing the “garden vs. stream” metaphor for the web. There was also a virtual IndieWebCamp popup session about the topic, which I sadly only heard about after the fact.

For those unaware of the metaphor, its origin seems to be a 2015 keynote (or its transcript) by Mike Caulfield, “The Garden and the Stream: A Technopastoral”. It compares most of the current web to a stream, where content primarily appears in chronological order. In contrast, the garden is a hyperlinked, timeless representation of connected content.

People who run personal websites as blogs are turning to wikis as a way to represent information. Anne-Laure Le Cunff of NessLabs, who was one of the main motivations for me to try Roam to organize my thoughts and research, has started Mental Nodes as her “mind garden”. It is a site based on TiddlyWiki that serves as the published counterpart of her private research notebook. The garden metaphor and the “tend to your garden” expression both apply to hyperlinked web content as much as they do to the mind itself.

It seems to me that many people are nostalgic about the pre-blog-era web, where individual homepages served as an informal outlet for their creators. However, I think there are good reasons that the stream dominates as the primary mechanism for content creation and consumption, especially in the mainstream (pun intended!).

While our human brains are capable of networked thinking, I believe it is an art to connect the dots of multiple areas of your life and the world around you. It is even harder to dive into the networked thoughts of another person because there is no clear path. I’m not saying it’s impossible, nor do I disagree about its value, but it’s much harder than tapping into a stream or appending your current thoughts to said stream.

People love stories and storytelling. And by that, I don’t just mean fiction, but also the kind of stories that journalists create from real-life events and those that marketers use to sell us products. A story may require some background information, but it is a coherent piece of its own. Every story we hear or read adds to our mental model of the world, even if we don’t consciously make the connections; and even if we don’t, we can still enjoy it on its own when it appears in the stream.

Every blog post, every tweet, everything we create can be considered a snapshot of our thoughts and ideas. These are, however, polished versions, not just raw dumps. It might be pretentious to call a post like this a story or even art. However, I hope it has some value, more than what I believe access to my notes in wiki-form could provide. And it is clear that it is a snapshot of myself in May 2020, and that adds relevant context in case my opinions evolve or change in the future.

Therefore, I’m unlikely to publish a mind garden for myself, but I’m happy to continue streaming stories to you.

It’s May 1st, the start of a new month! It’s also Labor Day, or Workers’ Day, or whatever you like to call it. I hope you enjoy your holiday despite the lockdown measures and, if you go outside, keep the necessary social distance.

Last night I listened to an episode of “The Future of Content” podcast 🎧 in which Lorna Mitchell was the guest. I don’t usually subscribe to this podcast, but because I know and follow Lorna, I discovered this episode.

It was a delightful 31-minute conversation, which I can recommend. I don’t want to summarize the entire episode, but I wanted to repeat a few significant points.

A lot of the episode dealt with the docs-as-code workflow. With docs-as-code, technical writers use tools like Markdown and Git to manage their content in a similar workflow as developers. That workflow appears to be an overall trend as it brings implementation and documentation closer together.

Additionally, it ties in well with two other aspects. One is reusability. Lorna stressed the importance of keeping the content and presentation separate. While this might seem obvious to developers (think HTML for structure, CSS for style), for documentarians working with WYSIWYG tools like Microsoft Word, it is a new concept. The huge advantage is that you can repurpose content in different ways, for example, between various conference talks, your website, a PDF whitepaper, and more.

The other aspect, specific to APIs, is the use of OpenAPI. Apart from a short “elevator pitch” from Lorna about how great OpenAPI is, the episode didn’t dive in too deep. But it reminded me of the unconference session I attended at the last virtual API the Docs event. In that session, we talked about how companies are doing exciting things with build pipelines that combine structured documentation (e.g., API references in OpenAPI) with Markdown files for more free-form documentation.

At the end of the episode, there was also a short conversation about Twitch streamers and how they explore new ways of explaining programming and technical concepts.

If you need assistance with your APIs, their documentation, and content production for developers, I think this is a great time to plug my freelance consulting business. You can learn more about my services and contact me through my website.

If you are German and have been on the Internet for more than a few years, you probably remember studiVZ. The social network launched at a time when Facebook was still very new and only available to college students in the United States. In its first iteration, it looked much like Facebook, just red instead of blue. A leaked PHP error message indicated that one of the source files even had the name fakebook.php. The network later expanded to high school students (“schülerVZ”) and the general public (“meinVZ”) but had no chance against the global giant. The company was sold multiple times and became practically irrelevant.

I was all the more surprised when I heard that the latest owner relaunched the network, now directly calling it “VZ” (VerZeichnis = directory). It’s a redesign of the old social network I knew, but it looks solid. There is no general newsfeed; all interactions happen in groups. That is in line with the prevailing social media trend of niche communities and “dark social”, as people realize that everybody just broadcasting creates a lot of content that either overwhelms or is rendered invisible by the algorithms.

There is no sign of APIs and integrations for VZ yet, and also no business model beyond advertising. Their only selling point with regard to privacy is that the servers are physically located in Germany.

I signed up mostly out of nostalgia. I’m not sure whether VZ stands a chance but, if you know me, you know I have a lot of sympathy for everybody who doesn’t just accept the Facebook monopoly and tries to do something different.

The blog you’re reading right now has existed since March 7, 2018. It is a hosted microblog on the micro.blog service run by Manton Reece. The service is a hybrid between blog (and podcast) hosting and a social network with a timeline. It launched on Kickstarter in January 2017 and opened its doors later the same year. I supported the campaign as backer number 592.

I’ve blogged a bit, but I’m not a super active community member. Still, I enjoy listening to Micro Monday, the weekly podcast introducing people who blog on the site. Catching up on the two latest episodes this morning motivated me to write a bit about the history of my (micro)blog.

In my time online, I used to have a variety of personal websites and blogs. Somehow I didn’t stick with most of them and started over a few times. Then, in 2012, a service called app.net was launched. It was what you could call a headless social network: the idea was that you had a centralized social graph and data storage, but you could use all sorts of apps and services to access it. It was an answer to the tendency of other social networks like Twitter to restrict their APIs and drive people to their official apps. At the same time, I followed the IndieWeb movement, the idea of owning your content and primarily making it available on a domain name you control while also integrating with existing social networks. Eventually, I married both approaches and built an open-source application called phpADNSite. With phpADNSite, your content and interactions lived on app.net, but you could present them on your domain through a custom template. Your domain also connected app.net with the IndieWeb.

Unfortunately, app.net stopped further development in 2014. There was still an engaged community at the time trying to support the platform under the “ADNFuture” banner, but it didn’t help. In March 2017, the platform shut down for good. Luckily, I had already considered this scenario when building phpADNSite by implementing a backup feature that served my old app.net content as a static website after the shutdown. It just didn’t allow me to create and share anymore. So, for a while, I couldn’t publish new content.

Since I still liked the general idea of separating data storage and presentation, I considered a variety of hosted DBaaS (database-as-a-service) and headless CMS (content management system) offerings as a replacement. Also, instead of a full application like phpADNSite, it could be served by a FaaS (function-as-a-service) serverless offering. In my mind, I dubbed this the “cloud-native IndieWeb”. However, I couldn’t decide on one specific approach. I wanted to experiment with multiple, but I didn’t have the time. That’s when I concluded that, even though “selfdogfooding” is a central idea of the IndieWeb community, it didn’t make sense to have my outlet for writing in the same place where I would do coding experiments, as it would make both activities dependent on each other.

One of the reasons why I signed up for the micro.blog crowdfunding in the first place was its unique, hybrid approach, which reminded me of my own. At the time of backing, I had no idea how I would use it. But eventually, I decided that having a hosted blog on a service that roughly follows my ideals is a great approach. I don’t need to host my own and can still retain some control through my domain name.

I hope you enjoyed this little backstory of my blog, and I sincerely hope that I will find some more time to experiment more with IndieWeb technologies and the “cloud-native IndieWeb” approach.

Recently, I heard a lot about a new software product called Roam Research. According to its website, it is “a note-taking tool for networked thought”. Anne-Laure Le Cunff of NessLabs, especially, seemed to be full of praise for the application. I still remember when Evernote launched and was described as “an extension of your brain”. But Roam seems to be the one fulfilling that promise because its structure is much more like a brain. I’ve used the tool for roughly two weeks now and wanted to write a summary of my experience and why and how I use it.

Generally, I do quite a bit of reading online, and I collect information that feels important to me from the articles I read, mostly by copying verbatim quotes. I used to copy those to Evernote, where I had notes for different topics in which I would collect those quotes and their source URLs. Titles of such notes could be something like “API Design”, “Developer Experience”, “Digital Transformation”, or “Climate Change”. And this is where the problems start. For example, what about an article that covers the impact of digital transformation on climate change? It should go in both notes. I could, of course, create a note for every external piece instead, but then the only way to connect the thoughts would be to make extensive use of tagging, which I don’t use a lot in Evernote.

Roam is a web-based combination of a wiki and an outliner. Even though you also create notes or pages, Roam makes it very easy to link different pages together, inline using hashtags (#) or double brackets ([[ ]]). Every page is a hierarchical list of hypertext paragraphs, and you can link from different hierarchy levels. The application also shows you when you have used a term for which a page exists but not linked, so you can decide whether you want to connect the thoughts or not. It can also visualize your whole database as a graph. In Roam, it is not a problem to add new articles you read as a page on their own and then establish links to the other material you have read, which makes the whole thing more comfortable and more rewarding.
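As a tiny illustration of that linking syntax (the page names, tag, and URL are made up), a few nested bullets in Roam could look like this:

```
- Read an article about the impact of [[Digital Transformation]] on [[Climate Change]] #reading
    - Data centers are becoming a relevant factor in global energy demand
    - Source: https://example.com/article
```

The article gets its own block, while the double-bracketed terms automatically become pages that collect every mention, so the same quote shows up in both contexts without duplication.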

I have a wide array of interests. Even my primary professional area has many interconnected aspects if you look at an API lifecycle and all the factors of an API - design, implementation, security, etc. - and then look at developer experience and developer relations, which involve, for example, technical writing. Then, there are many other areas of interest of mine, such as self-development, the future of work, basic income, effective altruism, and environmental issues. I don’t see different interests as separate domains but rather as various aspects of a whole that can influence each other, and where unusual connections can appear.

There are links, for example, between the API economy and the future of work. However, the picture in my mind still feels incomplete, and I lack the language to describe how it all fits together and what it means. I will continue trying to organize my thoughts in Roam, and I’m confident it will help me complete my mental model.

If there’s anything negative I can say about Roam, it is that it’s quite new, so it’s not clear how it will develop. It doesn’t have an API (or integrations) of its own yet, something I believe is a minimum requirement for any SaaS product launching today. Still, you can import and export data. Also, it’s free to use, with no pricing or published business model yet. I assume it will be a moderate monthly subscription, but it would be nice to know for sure.

Have you tried Roam already, and do you have any tips for me to make the most of it? Please let me know what you think! Thank you!

Security is an essential aspect of API design and implementation. And while implementing proper security measures can be hard, sometimes it’s the most basic stuff that goes wrong. The most recent APIsecurity.io newsletter was a good reminder of that.

A WordPress plugin, RankMath, introduced an API endpoint into WordPress instances. And it added this endpoint without any authentication or authorization checks, leaving it open to the world. There are very few cases where an API can deliberately omit authentication for anonymous access, for example, when you provide access to data that is public anyway. But the default approach should always be to implement authentication and test that the endpoint rejects all unauthorized requests.
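For WordPress REST endpoints specifically, the safer default is to register the route with a permission_callback that rejects unauthenticated or unauthorized requests. Here is a minimal sketch (the namespace, route, and capability are chosen for illustration and not taken from the actual plugin):

```php
<?php
// Register a custom REST API endpoint that only administrators may call.
add_action('rest_api_init', function () {
    register_rest_route('example-plugin/v1', '/settings', [
        'methods'  => 'POST',
        'callback' => function (WP_REST_Request $request) {
            // ... update plugin settings here ...
            return new WP_REST_Response(['updated' => true], 200);
        },
        // Without this check, the endpoint would be open to the whole world.
        'permission_callback' => function () {
            return current_user_can('manage_options');
        },
    ]);
});
```

A test suite should then include a request without credentials and assert that it is rejected, not only that the happy path works.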

Another, even more fundamental, problem occurred with the Tapplock smart lock. The IoT device used unencrypted HTTP to communicate with its server. Nobody should use unencrypted HTTP anymore, and most definitely not for APIs.

The newsletter also mentioned “broken object-level authorization” vulnerabilities in both Tapplock and another smart device, TicTocTrack. These so-called BOLA problems occur when there is proper authentication in place, but the code doesn’t check authorization for every object. It is a hard problem that cannot be solved in API design or with OpenAPI descriptions; your implementation code must prevent it. Once again, testing is your friend, and tests should not only cover success cases but also those you want to fail, to make sure they actually fail.
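In implementation terms, the missing piece is usually a single ownership check before returning or modifying an object. Here is a minimal sketch in PHP (the repository, user, and exception classes are hypothetical, not from any of the products mentioned):

```php
<?php
// Object-level authorization: authentication alone is not enough.
function getLockStatus(int $lockId, User $authenticatedUser, LockRepository $locks): array
{
    $lock = $locks->findById($lockId);

    if ($lock === null) {
        throw new NotFoundException("Lock $lockId does not exist");
    }

    // The BOLA check: does this particular object belong to the caller?
    if ($lock->ownerId !== $authenticatedUser->id) {
        throw new ForbiddenException('You are not allowed to access this lock');
    }

    return ['id' => $lock->id, 'locked' => $lock->isLocked()];
}
```

The corresponding test should then assert that a request for somebody else’s lock really is rejected, not only that the owner’s request succeeds.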

At the very least, however, make sure you have authentication in place (you can specify that in OpenAPI) and always use HTTPS!

Last night I took part in the first virtual API the Docs edition, where I listened to two great talks, one by Leah R. Tucker of {your}APIs and one by Kristof van Tomme of Pronovix. The event took place via GoToMeeting, with discussions happening in parallel on Slack. There was also an unconference part with breakout sessions happening via Google Meet, but unfortunately, I had to leave after the talks, so I couldn’t join them.

Leah talked about “Designing a future-proof API program”. She drew parallels between supply chains and the large numbers of APIs in an organization, emphasizing the need for consistency in APIs. I liked how she approached it not just from the perspective of developer experience but also from that of a more general brand experience. That might be the right way of putting it to get buy-in from non-technical management to invest in API design and build up a data steward team.

Kristof talked about “Beyond API Spray & Pray - Devportals in Digital Transformation”. He described two trends of digital transformation: the first is the redefinition of closeness, replacing physical proximity with digital proximity; the second is market complexity, for which he referred to the Cynefin framework. APIs and developer portals can help in achieving transformation. Kristof also gave an overview of different types of developer portals and the roles they play.

I enjoyed both talks and the Q&A that followed them. If you’re curious about the next event, you can register on the Eventbrite page and also join the new API the Docs Slack workspace.

Meetups, events, and conferences remain canceled. That affects API the Docs as well. Just a bit over a month ago, I wrote that I am volunteering on the speaker selection committee for their Portland conference and that I’m looking forward to attending the European editions in Cologne-Bonn and Brussels later this year. Portland is not happening, and neither is Cologne-Bonn. So far, Toronto in September and Brussels in November are still on, but it remains unclear how the global crisis unfolds. I hope that politicians lift strict lockdown measures or contact restrictions soon (maybe when we have enough face masks and privacy-friendly contact tracing apps). Still, I also feel that international conferences may not happen for an entire year. Once the series starts again, I’m happy to get back on speaker selection duty.

While the speaker committee has dissolved, the speakers still have an opportunity to give their presentations, just in a different format. Instead of an all-day conference, there will be smaller, bi-weekly virtual API the Docs events with two talks each. The first event is on Wednesday, April 8th; however, at the time of writing, it is already at maximum capacity. Make sure you register for an upcoming event on the Eventbrite page and also join the new API the Docs Slack workspace, where the social part of the events takes place and where you can learn more about the virtual API the Docs series.

The new coronavirus is slowing down public life and the economy. At the same time, however, I am observing the public discussion expand, especially on Twitter, around two topics that I am very interested in, Remote Work and Universal Basic Income (UBI).

For us lucky knowledge workers who just need a computer and an Internet connection to get work done, remote work has always been an option, but its global impact was limited. For every successful distributed company, there’s another one believing in “butts in seats”. That may change, as at least a fraction of the people now working remotely for the first time may find that it works well for them and their employers or clients. They may use this option much more in the future, with all the benefits (i.e., fewer carbon emissions from commuting) that come with it.

On the other hand, there are and always will be people who get work done with their hands and bodies out in the real world. Some of them have to continue working, but others won’t be able to. Direct support to their employers or a reduced tax burden does not reach all of them, especially self-employed workers in the “gig economy”. Handing out cash, on the other hand, does help everyone and may be a stimulus for an economy hit by the coronavirus. It is the right time to try a temporary UBI, or at least some one-time cash transfers, to collect more data points to prove that they work.

Along with my professional interests centered around APIs and developer experience, I have always been curious about the future of work. Every software developer and other person working in IT is in some way (maybe unconsciously) building that future. I believe that the API economy is one of the cornerstones of a world that Pieter Levels described as billions of self-employed makers and a few mega-corporations. We already have the latter, but for the former to thrive, we need UBI as a safety net. And they will be working remotely.

If there’s anything good coming from the current crisis, maybe it’s kickstarting the conversations about the essential topics for the future.

The API the Docs conference series is coming back to North America with an edition in Portland on 1st May, 2020, and I’m happy to make an announcement: Along with Laura Vass, Leona Campbell, and Yuki Zalkov, I’ll be part of the speaker selection committee.

The call-for-proposals (CFP) is still open until 29th February, after which the committee will review the submitted talks and choose the ones which we feel are most interesting and valuable to the community of API practitioners.

I’ve supported API the Docs in the past by being a part of the DevPortal Awards jury in the last two years, and this year I’m excited to volunteer for the community in a different role.

While I won’t be attending the Portland conference myself, I’m looking forward to meeting you at the two European editions in Cologne-Bonn and Brussels later this year.