Yesterday, I ran a little experiment; the first baby step into what is known as a #DigitalDetox or #DigitalSabbath. I switched off my laptop, phone, and other devices in the afternoon, and I didn’t switch them on again until this morning. A while ago, I found a Digital Sabbath website that explains why turning off technology for some time is healthy and useful: most of the technology we use follows addictive design patterns, which can be harmful if we’re no longer in control and, instead, the technology is in control of us. On top of that, because we have non-stop access to entertaining content, we rarely experience boredom. A bored mind is often a prerequisite to a creative process. The website poses it as a challenge to keep this up for one day each week over three months. I haven’t signed up for it yet, because I’m not sure where it fits best into my week, but I’m planning to take breaks more often.
Apart from that website, I also watched a video on YouTube this week titled “How I Tricked My Brain To Like Doing Hard Things” that describes a similar phenomenon, and I can highly recommend that video. It suggests something that goes beyond the digital detox, a “dopamine detox”. The method includes not just refraining from technology but also things like junk food or offline pleasure activities. It was another motivation for me to try this.
Don’t get me wrong. I love technology, social media, entertainment, and everything else this modern world has to offer. I already talked about my relationship with social media in the first post this year. It might sound hypocritical of me, as a technologist, to advocate for less. At the same time, I believe the saying “less is more” is accurate as well. I used my time off to play a bit on the piano, to finally start reading one of the books that had been waiting on my shelf for a long time, and to make progress in another. There’s more time to do the things you always wanted to do if you don’t spend it mindlessly scrolling on Twitter or browsing Netflix without actually deciding to watch a show.
My primary programming language is PHP, which means I am coding in something that 80% of web servers run and 80% of developers hate. It is one of the languages with the worst reputation. Today, I read another piece tackling the question of why developers hate PHP.
The article does an excellent job of explaining the origins of PHP. And it also shows the recent advancements and how much the language has improved. The author argues that many developers have made up their minds based on older versions of the language and have not updated their opinion in the light of new developments. Also, most widely deployed things are controversial, and it’s easier to hate on something everyone knows rather than something more obscure.
In the world of APIs, the choice of a programming language becomes less important. Different services can have various kinds of implementation details and communicate over standardized HTTP interfaces. If you are an API provider, you can build your backend in Python, Ruby, Go, Javascript, Rust, or whatever you prefer. You can even mix and match using microservices and internal APIs.
However, you have to be aware that the consumers of your APIs come with all sorts of languages and frameworks in which they will integrate your API. Your support and developer relationship teams will receive questions about all of them, and due to its popularity, PHP will be among them. In my opinion, no API program and developer portal are complete without code samples and tutorials covering PHP usage. If you offer SDKs, you need to have one for PHP.
So, if you are a Java shop that’s too “enterprisey” for PHP or a hip startup too cool to hire PHP developers, that’s where outsourcing comes in. And guess what: I can help you. I have been coding in PHP for almost two decades, and my current focus is creating developer content around APIs. I can also tap into the freelance talent pool to build content in all sorts of languages. Let’s talk about how I can support your customers from the PHP world.
I read Stephanie Morillo’s “The Developer’s Guide to Content Creation”, an e-book with a self-explanatory title. I can recommend it to everyone who’s getting into technical writing because it covers a lot of ground. In terms of its objective, it is similar to Lauren Lee’s “The Art Of Technical Writing” talk I wrote about last week, though it’s more extensive (obviously) and covers a few different areas.
Stephanie writes about defining your goals and generating content ideas, going through the planning, writing, and editing stages, choosing titles, calls to action, and resources, promoting content, and, finally, using analytics to iterate and improve.
For today, I want to focus on the first step, defining your goals. This post is inspired by the chapter in the book but contains additional thoughts and ideas from me. Writing and content creation can have many different purposes, and just creating something for a personal blog because you want to practice is a valid reason. Nevertheless, you have to think more strategically if you are a developer-focused company or are creating and sharing, for example, an open-source library with the world.
Every written piece of content, even your API reference, appears in search engines and thus is part of your marketing material. It may be the first part of your product someone sees. It doesn’t mean, however, that you must optimize everything for newbies or overinvest in SEO. There is a lot of value in creating content for advanced users of your product. Even documenting edge cases can pay off if it takes some load off your support.
Whenever you write something, think of your target audience. What do the developers know? Where are they in your funnel? Do you want to inspire them to start trying your product, or are they already sold and need some help? Often it is helpful to make up “personas”, which are fictional readers for whom you write. The most important thing to consider, though, is that you are not your target audience. You have already solved a problem that others still have, and you present your solution.
Also, think about your content strategy as part of the overall product strategy. For example, if you have an API with a wide range of applications, but your content only features use cases from a specific vertical, you will mainly attract developers from that vertical. Is that what you want?
Now that you have some things to consider when it comes to content marketing for developers, here’s my regular reminder that I’m available for hire for contract work. We can plan and create your developer content together. I’m looking forward to hearing from you!
Last night I joined the Vonage Developer Day live stream for a single presentation, Lauren Lee’s “The Art of Technical Writing”. Her talk’s objective was to motivate developers to write technical tutorials and provide them with the basics they need to get started. Lauren has an unusual background: she was a high school teacher before switching to a technical career, which means she knows a thing or two about education.
The talk was incredibly fast-paced due to her passion and energy, so I had a hard time keeping up with my notes. Nevertheless, I want to give you a little (subjective) summary.
Lauren says developer content should be instructional, non-assuming, timely, correct, and concise. The crucial points here are non-assuming, because we often make wrong assumptions about what common knowledge entails, and timely, because technical content may soon get outdated. If you don’t know what to write about, “write the article you wish you found when you googled something.”
When it comes to creating tutorials, she suggests getting early feedback on an outline before starting to code and write. Then, implement the application and keep a journal or good commit comments that form the basis of your writing. After coding, move on to writing as soon as possible, so the memories of your challenges are still fresh. Edit later. Take time for the revisions and, again, get feedback.
A developer tutorial should start with an introduction, set a goal, explain the prerequisites, and then go through the necessary steps. It’s not required to document the whole codebase, just the essential parts. Include screenshots or animations. Put a summary at the end, and don’t worry about repetition; some of your readers will reach it after skipping over other parts.
Some of Lauren’s general writing advice includes using a conversational tone, avoiding trivializing words like “simply”, using inclusive language, and steering clear of references that might soon be outdated (something I wrote about lately, too). And, of course, practice!
Once you’ve published your piece, share it loudly. Send it to people and look for cross-posting opportunities. Analytics tools are your friends to find out what works.
I enjoyed this talk. Though I have a lot of experience writing on this blog and the CloudObjects blog and creating content for my clients, there were still some aspects that were new or good to hear again, and that will help me get better at my craft.
So, what are you waiting for? Go and create some amazing developer content! Or, if you don’t want to do it yourself, hire me for a contract.
One of the biggest and most unexpected pieces of news from the tech world this week was the acquisition of Keybase by Zoom. Video communications app Zoom is one of the big winners of the current COVID-19 pandemic but has received criticism with regard to privacy and security. In contrast, Keybase has done a lot of exciting things in the realm of zero-knowledge, end-to-end encrypted tools for individuals and businesses alike, but appears stuck in its nerd-and-crypto niche.
I found out about the acquisition on Twitter, where many people reacted negatively and loudly proclaimed that they were deleting their Keybase accounts. The Keybase blog post doesn’t sound overly optimistic about the app’s future, and many expect it to land in the Incredible Journey graveyard before long.
Selling their startup is a decision that I assume no founder takes lightly, so I am very wary of accusing anybody of being a sell-out. At the same time, I am worried because every M&A deal decreases the number of independent players on the market, and a loss of competition generally hurts consumers, so I always feel a little sad. A good counter-argument, however, is the strong dominance of the so-called GAFAM: Google, Apple, Facebook, Amazon, and Microsoft. Two independent players teaming up stand a better chance against the behemoths.
I am cautiously optimistic here. Zoom’s biggest competitors are Microsoft (with both MS Teams and Skype) and Google (Meet), both of which are part of a business application suite. Keybase has a team product with team chat and file storage, all end-to-end encrypted. Zoom could merge with this to move beyond video calls and offer a full zero-knowledge collaboration suite for businesses. Also, even if it doesn’t play out like this, bringing encryption to mainstream Zoom is a huge win.
I don’t expect the Keybase app to shut down soon, as I assume it’s not too costly to keep running, and, last but not least, the Stellar foundation might step up. We could even end up with an open-source Keybase server; their client-side code is already open source. Still, I’d love to hear more about their plans soon to gain a bit of confidence before investing time and effort in using Keybase.
While going through older stuff saved in Pocket, I found a talk titled “Building A Content Marketing Machine” by Hiten Shah, which he gave at Heavybit, an accelerator for developer-focused startups. While the video is a few years old, I think it makes a lot of good points about content marketing for developers that still apply today.
If you look at the traffic for developer content such as blog posts, organic search is the primary source. Social media like Twitter is fantastic for engaging with developers but typically not a huge source of traffic. Hence, SEO (search engine optimization) is essential, but there are no shady tricks in SEO anymore. The only formula that works is to produce both quantity and quality and be patient. Content is a long game.
You should always be aware of your audience. Targeting developers at startups and CTOs at enterprises is entirely different. And you have to remember that the primary purpose of content is to provide something of value for them, not just, for example, show off your company culture.
Also, don’t just invest in content production but also in promotion. Influencer marketing works well for developers, so reach out to relevant people directly. Repurposing content in different formats, such as turning a conference talk into a blog post or a podcast, is worth it because you can increase the reach of your content without investing in something new every time.
Finally, outsourcing content production is possible. Hiten gave the example of Kissmetrics, which, at some point, had 99% of its blog posts written by guest authors.
To summarize, you need both quantity and quality in technical content, tailored to your audience, and you can tap into external talents to create it. And guess what, I provide precisely this kind of service through my consulting business. Contact me to learn more!
There is a lot of buzz around “no-code” tools that empower people to build things without writing code. Website builders like Wix fall into this category, and so do iPaaS offerings like IFTTT or Zapier. Makerpad is a community where people can learn how to launch a business with only those tools, without having to be or hire a developer. While I love and use some of those tools myself, they are also limited and don’t offer the full power of programming.
Anil Dash is the CEO of Glitch, a web-based IDE with a cloud-based runtime where people can write code and connect with a community of developers. He recently published an article on LinkedIn about a concept called “Yes Code”. Anil has similar sentiments about the potential of being able to code and believes that we should empower people to learn it instead of just hiding the code behind the abstraction layers of “no-code” tools. He writes about coding as a superpower and how it can help us build a better, “new human web” when we include more people in this process. I don’t want to repeat his points, so go and read his article.
For me, Anil’s thoughts are a good reminder of why I’m passionate about excellent API design and unique developer content. Yes, we need good material to teach the basics of programming, but we also need to make our APIs, SDKs, and (open-source) libraries accessible and beginner-friendly. It is not only the right thing to do if you care about being inclusive, but it also makes good business sense to extend your audience and help someone build their next independent business on top of your API.
I can help you improve your API design to make it better for everyone, not just beginners, and I can create additional content to teach your API or developer product. Send me an email or fill out this form to learn more about my services.
It’s May 4th today. Happy Star Wars Day!
In case you didn’t know why this is Star Wars Day, think of the famous quote from the movies: “May the Force be with you.” Well, doesn’t “may the force” sound a bit like “May the fourth”? It’s a pop-culture reference, and not everybody gets it. That made me think about whether or not to use cultural references in technical writing and developer content.
On the one hand, there is a particular set of famous cultural works that are associated with “nerds”, and being a software developer is considered being a part of the same (sub)culture. Developers can bond over shared interests in movies, music, etc. in the same way as they can bond (or playfully fight) over their favorite programming language or text editor. Fictional worlds provide engaging scenarios away from the mundane daily (home) office life, adding color and depth to sample code and tutorials. Why not take your first steps into the world of APIs with the Star Wars API?
On the other hand, referencing works from the Western, male-dominated nerd culture could backfire and make women and people from different cultural backgrounds feel excluded. I firmly believe that writing code and participating in the API economy is for everyone. Hence, we should be accommodating to folks from all walks of life.
Additionally, heavy use of references to commercial works of art could be considered copyright infringement. That is something especially larger companies should think about (and run by their legal department) before leaning on these works too heavily.
That said, are you looking for additional tutorials for your API, with or without cultural references? Check out my website for developer content production offers and talk to me about them. I am looking forward to hearing from you.
In my corner of the Internet (or dare I say “filter bubble”), I’ve seen a lot of recent conversations resurfacing the “garden vs. stream” metaphor for the web. There was also a virtual IndieWebCamp popup session about the topic, which I sadly only heard about after the fact.
For those unaware of the metaphor: its origin seems to be a 2015 keynote (or rather its transcript) by Mike Caulfield, “The Garden and the Stream: A Technopastoral”. It compares most of the current web to a stream, where content primarily appears in chronological order. The garden, in contrast, is a hyperlinked, timeless representation of connected content.
People who run personal websites as blogs are turning to wikis as a way to represent information. Anne-Laure Le Cunff of NessLabs, who was one of my main motivations to try Roam to organize my thoughts and research, has started Mental Nodes as her “mind garden”. It is a site based on TiddlyWiki that serves as the published counterpart of her private research notebook. Both the garden metaphor and the “tend to your garden” expression apply to hyperlinked web content as much as they do to the mind itself.
It seems to me that many people are nostalgic about the pre-blog-era web, where individual homepages served as an informal outlet for their creators. However, I think there are good reasons that the stream dominates as the primary mechanism for content creation and consumption, especially in the mainstream (pun intended!).
While our human brains are capable of networked thinking, I believe it is an art to connect the dots between the different areas of your life and the world around you. It is even harder to dive into the networked thoughts of another person because there is no clear path. I’m not saying it’s impossible, nor am I disputing its value, but it’s much harder than tapping into a stream or appending your current thoughts to said stream.
People love stories and storytelling. And by that, I don’t just mean fiction, but also the kind of stories that journalists create from real-life events and those that marketers use to sell us products. A story may require some background information, but it is a coherent piece of its own. Every story we hear or read adds to our mental model of the world, even if we don’t consciously make the connections; and even when we don’t, we can still enjoy the story on its own as it appears in the stream.
Every blog post, every tweet, everything we create can be considered a snapshot of our thoughts and ideas. These are, however, polished versions, not just raw dumps. It might be pretentious to call a post like this a story or even art. However, I hope it has some value, more than what I believe access to my notes in wiki-form could provide. And it is clear that it is a snapshot of myself in May 2020, and that adds relevant context in case my opinions evolve or change in the future.
Therefore, I’m unlikely to publish a mind garden for myself, but I’m happy to continue streaming stories to you.
It’s May 1st, the start of a new month! It’s also Labor Day, or Workers’ Day, or whatever you like to call it. I hope you enjoy your holiday despite the lockdown measures and, if you go outside, keep the necessary social distance.
Last night I listened to an episode of the “Future of Content” podcast 🎧 with Lorna Mitchell as the guest. I don’t usually subscribe to this podcast, but because I know and follow Lorna, I discovered this episode.
It was a delightful 31-minute conversation, which I can recommend. I don’t want to summarize the entire episode, but I wanted to repeat a few significant points.
A lot of the episode dealt with the docs-as-code workflow. With docs-as-code, technical writers use formats and tools like Markdown and Git to manage their content in the same way developers manage code. That approach seems to be an overall trend, as it brings implementation and documentation closer together.
Additionally, it ties in well with two other aspects. One is reusability. Lorna stressed the importance of keeping the content and presentation separate. While this might seem obvious to developers (think HTML for structure, CSS for style), for documentarians working with WYSIWYG tools like Microsoft Word, it is a new concept. The huge advantage is that you can repurpose content in different ways, for example, between various conference talks, your website, a PDF whitepaper, and more.
The other aspect, specific to APIs, is the use of OpenAPI. Apart from a short “elevator pitch” from Lorna about how great OpenAPI is, the episode didn’t dive in too deep. But it reminded me of the unconference session I attended at the last virtual API the Docs event. In that session, we talked about how companies are doing exciting things with build pipelines that combine structured documentation (e.g., API references in OpenAPI) with Markdown files for more free-form documentation.
At the end of the episode, there was also a short conversation about Twitch streamers and how they explore new ways of explaining programming and technical concepts.
If you need assistance with your APIs, their documentation, and content production for developers, I think this is a great time to plug my freelance consulting business. You can learn more about my services and contact me through my website.
If you are a German and have been on the Internet for more than a few years, you probably remember studiVZ. The social network launched at a time when Facebook was still very new and only available to college students in the United States. In its first iteration, it looked much like Facebook, just red instead of blue. A leaked PHP error message indicated that one of the source files even had the name fakebook.php. The network later expanded to high school students (“schülerVZ”) and the general public (“meinVZ”) but had no chance against the global giant. The company was sold multiple times and became practically irrelevant.
I was all the more surprised when I heard that the latest owner relaunched the network, now simply called “VZ” (VerZeichnis = directory). It’s a redesign compared to the old social network I knew, but it looks solid. There is no general newsfeed; all interactions happen in groups. That is in line with the prevailing social media trend toward niche communities and “dark social”, as people realize that everybody broadcasting creates a lot of content that either overwhelms or is rendered invisible by the algorithms.
There is no sign of APIs and integrations for VZ yet, and also no business model outside of advertising. Their only selling point with regard to privacy is that the servers are physically located in Germany.
I signed up mostly because of nostalgia. I’m not sure if VZ has any chance but, if you know me, I have a lot of sympathy for everybody who doesn’t just accept the Facebook monopoly and tries to do something different.
This blog you’re reading right now has existed since March 7, 2018. It is a hosted microblog on the micro.blog service run by Manton Reece. The service is a hybrid between blog (and podcast) hosting and a social network with a timeline. It launched on Kickstarter in January 2017 and opened its doors later the same year. I supported the campaign as backer #592.
I’ve blogged a bit, but I’m not a super active community member. Still, I enjoy listening to Micro Monday, the weekly podcast introducing people who blog on the site. Catching up on the two latest episodes this morning motivated me to write a bit about the history of my (micro)blog.
In my time online, I used to have a variety of different personal websites and blogs. Somehow I didn’t stick with most of them and started over a few times. Then, in 2012, a service called app.net launched. It was what you could call a headless social network. The idea was that you had a centralized social graph and data storage, but you could use all sorts of apps and services to access it. It was an answer to other social networks like Twitter restricting their APIs and driving people to their official apps. At the same time, I followed the IndieWeb movement, the idea of owning your content and primarily making it available on a domain name you control while also integrating with existing social networks. Eventually, I married both approaches and built an open-source software called phpADNSite. With phpADNSite, your content and interactions lived on app.net, but you could present them on your domain through a custom template. Your domain also connected app.net with the IndieWeb.
Unfortunately, app.net stopped further development in 2014. There was still an engaged community at the time trying to support the platform under the “ADNFuture” banner, but it didn’t help. In March 2017, the platform shut down for good. Luckily, I had already considered this scenario when building phpADNSite and had implemented a backup feature that served my old app.net content as a static website after the shutdown. It just didn’t allow me to create and share anymore, so, for a while, I couldn’t publish new content.
Since I still liked the general idea of separating data storage and presentation, I considered a variety of hosted DBaaS (database-as-a-service) or headless CMS (content management system) offerings as a replacement. Also, instead of a full application like phpADNSite, it could be served by a FaaS (function-as-a-service) serverless offering. In my mind, I dubbed this the “cloud-native IndieWeb”. However, I couldn’t decide on one specific approach. I wanted to experiment with several, but I didn’t have the time. That’s when I concluded that, even though “selfdogfooding” is a central idea of the IndieWeb community, it didn’t make sense to have my writing outlet in the same place as my coding experiments, as that made both activities dependent on each other.
One of the reasons I signed up for the micro.blog crowdfunding in the first place was its unique, hybrid approach, which reminded me of my own. At the time of backing, I had no idea how I would use it. But eventually, I decided that having a hosted blog on a service roughly following my ideals is a great approach: I don’t need to host my own and can still retain some control through my domain name.
I hope you enjoyed this little backstory of my blog, and I sincerely hope that I will find some more time to experiment more with IndieWeb technologies and the “cloud-native IndieWeb” approach.
Recently I have heard a lot about a new piece of software called Roam Research. According to its website, it is “a note-taking tool for networked thought”. Anne-Laure Le Cunff of NessLabs, in particular, seemed to be full of praise for the application. I still remember when Evernote launched and was described as “an extension of your brain”. But Roam seems to be the one fulfilling that promise because its structure is much more like a brain’s. I’ve used the tool for roughly two weeks now and wanted to write a summary of my experience and why and how I use it.
Generally, I do quite a bit of reading online, and I collect information that feels important to me from the articles I read, mostly by copying verbatim quotes. I used to copy those into Evernote, where I had notes for different topics in which I collected the quotes and their source URLs. Titles of such notes could be something like “API Design”, “Developer Experience”, “Digital Transformation”, or “Climate Change”. And this is where the problems start. For example, what about an article that covers the impact of digital transformation on climate change? It should go in both notes. Of course, I could instead create a note for every external piece, but then the only way to connect the thoughts would be extensive tagging, which I don’t use much in Evernote.
Roam is a web-based combination of a wiki and an outliner. Even though you also create notes or pages, Roam makes it very easy to link pages together, inline, using hashtags (#) or double brackets ([[ ]]). Every page is a hierarchical list of hypertext paragraphs, and you can link from any hierarchy level. The application also shows you when you have used a term for which a page exists but haven’t linked it, so you can decide whether or not to connect the thoughts. It can also visualize your whole database as a graph. In Roam, it is no problem to add each article you read as a page of its own and then establish links to the other material you have read, which makes the whole thing more comfortable and more rewarding.
I have a wide array of interests. Even my primary professional area has many interconnected aspects if you look at the API lifecycle and all the facets of an API (design, implementation, security, etc.) and then at developer experience and developer relations, which involve, for example, technical writing. Then, there are many other interests of mine, such as self-development, the future of work, basic income, effective altruism, and environmental issues. I don’t see these as separate domains but rather as various aspects of a whole that can influence each other and where unusual connections can appear.
There are links, for example, between the API economy and the future of work. However, the picture in my mind still feels incomplete, and I lack the language to describe how it all fits together and what it means. I will continue and try organizing my thoughts in Roam, and I’m confident it will help me complete my mental model.
If there’s anything negative I can say about Roam, it’s that it’s quite new, so it’s not clear how it will develop. It doesn’t have an API (or integrations) of its own yet, something I believe is a minimum requirement for any SaaS product launching today. Still, you can import and export data. Also, it’s free to use, with no pricing or published business model yet. I assume there will be a moderate monthly subscription, but it would be nice to know for sure.
Have you tried Roam already, and do you have any tips for me to make the most of it? Please let me know what you think! Thank you!
Security is an essential aspect of API design and implementation. And while implementing proper security measures can be hard, sometimes it’s the most basic stuff that goes wrong. The most recent APIsecurity.io newsletter was a good reminder of that.
A WordPress plugin, RankMath, introduced an API endpoint into WordPress instances. And it added this endpoint without any authentication or authorization checks, leaving it open to the world. There are very few cases where an API can deliberately omit authentication and allow anonymous access, for example, when it provides data that is public anyway. But the default approach should always be to implement authentication and test that the endpoint rejects all unauthorized requests.
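As a sketch of that default-deny approach (the handler, token store, and token values here are hypothetical illustrations, not RankMath’s actual code), an endpoint should reject every request unless it carries valid credentials:

```python
# Hypothetical sketch of a default-deny authentication check for an API
# endpoint. The token store and names are made up for illustration; a real
# API would look hashed keys up in a database.

VALID_TOKENS = {"s3cret-token"}

def handle_request(headers):
    """Return (status, body); reject unless a valid bearer token is present."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401, "Missing credentials"
    token = auth[len("Bearer "):]
    if token not in VALID_TOKENS:
        return 401, "Invalid credentials"
    return 200, "OK"
```

The point is the shape of the logic: every path that doesn’t positively verify credentials ends in a 401, and a test suite should assert exactly that for requests with no token and with a wrong token.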
Another, even more fundamental problem occurred with the Tapplock smart lock. The IoT device used unencrypted HTTP to communicate with its server. Nobody should use unencrypted HTTP anymore, least of all for APIs.
The newsletter also mentioned “broken object-level authorization” vulnerabilities in both Tapplock and another smart device, TicTocTrack. These so-called BOLA problems occur when there is proper authentication in place, but the code doesn’t check authorization for every object. It is a hard problem, and it cannot be solved in API design or with OpenAPI descriptions; your implementation code must prevent it. Once again, testing is your friend, and tests should cover not only the success cases but also those you want to fail, to make sure they actually fail.
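A tiny Python sketch of what that per-object check looks like in implementation code (all names are illustrative, not taken from either vendor’s codebase). Authentication tells you *who* the caller is; a BOLA bug happens when the code then hands out any object by ID without checking *whose* it is.

```python
# Hypothetical in-memory store of smart locks and their owners.
locks = {
    "lock-1": {"owner": "alice", "state": "locked"},
    "lock-2": {"owner": "bob", "state": "unlocked"},
}


def get_lock(authenticated_user: str, lock_id: str) -> dict:
    """Fetch a lock, enforcing object-level authorization."""
    lock = locks.get(lock_id)
    if lock is None:
        raise KeyError("no such lock")
    # The check that prevents BOLA: does this object belong to the caller?
    if lock["owner"] != authenticated_user:
        raise PermissionError("not your lock")
    return lock


# Success case: Alice can read her own lock.
assert get_lock("alice", "lock-1")["state"] == "locked"

# Failure case: Alice must NOT be able to read Bob's lock,
# even though she is properly authenticated.
try:
    get_lock("alice", "lock-2")
    assert False, "BOLA: object-level authorization check is missing"
except PermissionError:
    pass
```

The vulnerable pattern is simply the same function without the ownership check; an authenticated attacker could then enumerate IDs and read everyone’s devices.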
At the very least, however, make sure you have authentication in place (you can specify that in OpenAPI) and always use HTTPS!
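For reference, declaring such a requirement in OpenAPI only takes a few lines. This is a minimal OpenAPI 3.0 fragment (the scheme and header names are illustrative) that defines an API-key scheme and applies it globally to all operations:

```yaml
# Minimal OpenAPI 3.0 fragment: declare a security scheme and
# apply it as the default for every operation in the API.
components:
  securitySchemes:
    ApiKeyAuth:
      type: apiKey
      in: header
      name: X-API-Key
security:
  - ApiKeyAuth: []
```

Individual operations can still override the global `security` list, which is also how you would explicitly mark the rare deliberately public endpoint (with an empty requirement) instead of leaving it open by accident.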
Last night I took part in the first virtual API the Docs edition, where I listened to two great talks, one by Leah R. Tucker of {your}APIs and one by Kristof van Tomme of Pronovix. The event took place via GoToMeeting, with discussions happening in parallel on Slack. There was also an unconference part with breakout sessions happening via Google Meet, but unfortunately, I had to leave after the talks so I couldn’t join them.
Leah talked about Designing a future-proof API program. She drew parallels between supply chains and large numbers of APIs in an organization, emphasizing the need for consistency in APIs. I liked how she approached it not just from a perspective of developer experience but also a more general brand experience. That might be the right way of putting it to get buy-in from non-technical management to invest in API design and build up a data steward team.
Kristof talked about Beyond API Spray & Pray - Devportals in Digital Transformation. He described two trends of digital transformation: the first is the redefinition of closeness, replacing physical proximity with digital proximity; the second is market complexity, for which he referred to the Cynefin framework. APIs and developer portals can help in achieving transformation. Kristof also gave an overview of different types of developer portals and the role they play.
I enjoyed both talks and the Q&A that followed them. If you’re curious about the next event, you can register on the Eventbrite page and also join the new API the Docs Slack workspace.
Meetups, events, and conferences remain canceled. That affects API the Docs as well. Just a bit over a month ago, I wrote that I am volunteering on the speaker selection committee for their Portland conference and that I’m looking forward to attending the European editions in Cologne-Bonn and Brussels later this year. Portland is not happening, and neither is Cologne-Bonn. So far, Toronto in September and Brussels in November are still on, but it remains unclear how the global crisis will unfold. I hope that politicians lift strict lockdown measures or contact restrictions soon (maybe when we have enough face masks and privacy-friendly contact tracing apps). Still, I also feel that international conferences may not happen for an entire year. Once the series starts again, I’m happy to get back on speaker selection duty.
While the speaker committee has dissolved, the speakers still have an opportunity to give their presentations, just in a different format. Instead of an all-day conference, there will be bi-weekly smaller virtual API the Docs events with two talks each. The first event is on Wednesday, April 8th. However, at the time of writing, it is already at maximum capacity. Make sure you register for an upcoming event on the Eventbrite page and also join the new API the Docs Slack workspace where the social part of the events takes place and where you can learn more about the virtual API the Docs.
The new coronavirus is slowing down public life and the economy. At the same time, however, I am observing the public discussion expand, especially on Twitter, around two topics that I am very interested in, Remote Work and Universal Basic Income (UBI).
For us lucky knowledge workers who just need a computer and an Internet connection to get work done, remote work was always an option, but its global impact was limited. For every successful distributed company, there’s another one believing in “butts in seats”. That may change as at least a fraction of the people who work remotely for the first time may find it works well for them and their employers or clients. They may use this option much more in the future, with all the benefits (e.g., fewer carbon emissions from commuting) that come with it.
On the other hand, there are and always will be people who get work done with their hands and bodies out in the real world. Some of them have to continue working, but others won’t. Direct support to their employers or a reduced tax burden does not reach all of them, especially self-employed workers in the “gig economy”. Handing out cash, on the other hand, does help everyone and may be a stimulus for the economy hit by the coronavirus. It is the right time to give a temporary UBI a try or at least some one-time cash transfers to collect more data points to prove that they work.
Along with my professional interests centered around APIs and developer experience, I have always been curious about the future of work. Every software developer and other person working in IT is in some way (maybe unconsciously) building that future. I believe that the API economy is one of the cornerstones of a world that Pieter Levels described as billions of self-employed makers and a few mega-corporations. We already have the latter, but for the former to thrive, we need UBI as a safety net. And they will be working remotely.
If there’s anything good coming from the current crisis, maybe it’s kickstarting the conversations about the essential topics for the future.
The API the Docs conference series is coming back to North America with an edition in Portland on 1st May, 2020, and I’m happy to make an announcement: Along with Laura Vass, Leona Campbell, and Yuki Zalkov, I’ll be part of the speaker selection committee.
The call-for-proposals (CFP) is still open until 29th February, after which the committee will review the submitted talks and choose the ones which we feel are most interesting and valuable to the community of API practitioners.
I’ve supported API the Docs in the past by being a part of the DevPortal Awards jury in the last two years, and this year I’m excited to volunteer for the community in a different role.
While I won’t be attending the Portland conference myself, I’m looking forward to meeting you at the two European editions in Cologne-Bonn and Brussels later this year.
Let’s indulge a bit in nostalgia this weekend. I just remembered one of the websites that I used to frequent a lot around 15-20 years ago. The site was called klamm.de, and it was a German paid portal site. Or should I say, it is, because if you follow the link, you might see that the site still exists. It still looks much like it did back then and retains the features that were developed in its early years after its launch in 1999.
At the time, “getting paid for looking at ads” was the latest fad with paid email promotions, reviews, and even “surf bars” which would continuously show rotating banner ads next to your browser. All with sophisticated multi-level affiliate programs to make sure you’d invite your friends. And late-teenage me was much more curious about the ideas and making some money (though I never made anything substantial) than critical of advertising and the privacy-invading technology behind it, as I am today.
Anyway, klamm.de was less about the earnings and more about the community - the so-called “klammunity” - and I spent quite a bit of my time on the forums of the site. Also, I assume that the site was responsible for my interest in APIs that drives my work today. How so?
At one time, klamm.de introduced “Lose” (lottery tickets) as its virtual currency, which users could bet to win prizes. At the same time, they could be traded between users. And, to drive this process, site owner Lukas Klamm (with whom I coincidentally share the first name) created an API called ExportForce. And I remember the first thing I did. I took a JavaScript-based roulette game that I had created as part of my high school computer science class. Then, I hooked it up to the API so that you could win “klamm Lose” playing roulette.
Of course, it was a stupid idea, because the game ran on the client and would report results to the server, so you could easily cheat. Still, it kicked off other hobby developers in the klammunity to build things around the API. And I learned a lot from it, too.
It’s interesting to see some of the “paid4” sites still around, even though earnings are minuscule, and we’re already annoyed enough by the advertisements we don’t get paid for. I deleted my klamm.de account after not using it for a few years, but I’d love to log in again and take a trip back in time.
Last week, I published a release of phpMAE along with an announcement post and tutorial on the CloudObjects blog. This week, as a follow-up, I have written a little about the background of the breaking changes in my open-source PHP-based sandbox for serverless/Function-as-a-Service (FaaS) development.
I finally managed to push a new release for phpMAE, my open-source PHP-based sandbox for serverless/Function-as-a-Service (FaaS) development, and (experimental) hosted service, which is a part of CloudObjects. For this release, I’ve updated the Getting Started tutorial and published it on the CloudObjects Blog just now. I’d be happy if you give it a try!
This is my first blog post in 2020, so first of all: Happy New Year! 🎇
The beginning of a new number on the calendar is a good time for some self-reflection. Among other things, I have thought about my relationship with social media again. As for many others who spend a great deal of time on the Internet, it’s a sort of love-hate relationship. On the one hand, I enjoy the power of social media to connect people. On the other hand, it’s kind of addictive and can lead to mindless scrolling, which can be a huge timesink and make you feel unhappy.
For Facebook, I’ve reenabled the Disable Facebook News Feed Firefox extension, which I used before but disabled at some point.
For Twitter, I’ve taken a little inspiration from Glitch CEO and blogger Anil Dash, who wrote about cleaning up his Twitter feed for the beginning of the new year. The post is from 2018, but he posted a tweet indicating he did the same thing this year.
I couldn’t convince myself to be as radical as Anil, so I used Tokimeki Unfollow instead. The application is inspired by Marie Kondo and works by showing you each of the accounts you follow, one by one, along with their latest tweets, and asking whether those tweets “still spark joy”. You can then choose to either unfollow or keep them. The process is also comparable to swiping through Tinder and similar apps.
I unfollowed inactive accounts, those whose tweet frequency is too high, and those where I can’t remember why I started following them. I kept friends and people I’ve met in person or interacted with lately. It wasn’t a vast purge, but at least I got down from 492 to 302 followed accounts. My Twitter feed feels different and less overwhelming now.
For other networks, I haven’t made any changes.