Lukas Rosenstock's Blog

Last night I joined the Vonage Developer Day live stream for a single presentation, Lauren Lee’s “The Art of Technical Writing”. Her talk’s objective was to motivate developers to write technical tutorials and provide them with the basics they need to get started. Lauren has an unusual background: she was a high school teacher before switching to a technical career, which means she knows a thing or two about education.

The talk was incredibly fast-paced due to her passion and energy, so I had a hard time keeping up with my notes; nevertheless, I want to give you a little (subjective) summary.

Lauren says developer content should be instructional, non-assuming, timely, correct, and concise. The crucial points here are non-assuming, because we often make wrong assumptions about what counts as common knowledge, and timely, because technical content can become outdated quickly. If you don’t know what to write about, “write the article you wish you found when you googled something.”

When it comes to creating tutorials, she suggests getting early feedback on an outline before starting to code and write. Then, implement the application and keep a journal or good commit comments that form the basis of your writing. After coding, move on to writing as soon as possible, so the memories of your challenges are still fresh. Edit later. Take time for the revisions and, again, get feedback.

A developer tutorial should start with an introduction, set a goal, explain the prerequisites, and then go through the necessary steps. It’s not required to document the whole codebase, just the essential parts. Include screenshots or animations. Put a summary at the end, and don’t worry about repetition; some of your readers will get there after skipping over other parts.

Some of Lauren’s general writing advice includes using a conversational tone without simplification words, using inclusive language, and avoiding references that might become outdated soon (something I wrote about recently, too). And, of course, practice!

Once you’ve published your piece, share it loudly. Send it to people and look for cross-posting opportunities. Analytics tools are your friends to find out what works.

I enjoyed this talk. Though I have a lot of experience from writing on this blog and the CloudObjects blog and from creating content for my clients, there were still some new ideas, as well as familiar ones worth hearing again, that will help me get better at my craft.

So, what are you waiting for? Go and create some amazing developer content! Or, if you don’t want to do it yourself, hire me for a contract.

One of the biggest and most unexpected pieces of news from the tech world this week was the acquisition of Keybase by Zoom. Video communications app Zoom is one of the big winners of the current COVID-19 pandemic but has received criticism regarding privacy and security. In contrast, Keybase has done a lot of exciting things in the realm of zero-knowledge, end-to-end encrypted tools for individuals and businesses alike, but appears stuck in its nerd and crypto niche.

I found out about the acquisition on Twitter, where a lot of people have negative attitudes and loudly proclaim that they are deleting their Keybase accounts. The Keybase blog post doesn’t sound overly optimistic about its future, and many expect the app to land in the Incredible Journey graveyard in the foreseeable future.

Selling their startup is a decision that I don’t assume any founder takes lightly, so I am very wary of accusing anybody of being a sell-out. At the same time, I am worried because every M&A deal decreases the number of independent players on the market, and loss of competition generally hurts consumers, so I always feel a little sad. A good counter-argument, however, is the strong dominance of the so-called GAFAM (Google, Apple, Facebook, Amazon, and Microsoft). Two independent players teaming up stand a better chance against the behemoths.

I am cautiously optimistic here. Zoom’s biggest competitors are Microsoft (with both MS Teams and Skype) and Google (Meet), both of which are part of a business application suite. Keybase has a team product with team chat and file storage, all end-to-end encrypted. Zoom could build on this to move beyond video calls and offer a full zero-knowledge collaboration suite for businesses. Also, even if it doesn’t play out like this, bringing end-to-end encryption to mainstream Zoom is a huge win.

I don’t expect the Keybase app to shut down soon, as I assume it’s not too costly to keep it running, and, last but not least, the Stellar foundation might step up. We could even end up with an open-source Keybase server; the client-side code is already open-source. Still, I’d love to hear more about their plans soon to gain a bit of confidence before investing time and effort in using Keybase.

While going through older stuff saved in Pocket, I found a talk titled “Building A Content Marketing Machine” by Hiten Shah, which he gave at HeavyBit, an accelerator for developer-focused startups. While the video is a few years old, it makes a lot of good points about content marketing for developers that still apply today.

If you look at the traffic for developer content such as blog posts, organic search is the primary source. Social media like Twitter is fantastic for engaging with developers but typically not a huge source of traffic. Hence, SEO (search engine optimization) is essential, but there are no shady tricks in SEO anymore. The only formula that works is to produce both quantity and quality and be patient. Content is a long game.

You should always be aware of your audience. Targeting developers at startups and CTOs at enterprises is entirely different. And you have to remember that the primary purpose of content is to provide something of value for them, not just, for example, show off your company culture.

Also, don’t just invest in content production; invest in promotion as well. Influencer marketing works well for developers, so reach out to relevant people directly. Repurposing content in different formats, such as turning a conference talk or a podcast episode into a blog post, is worth it because you can increase the reach of your content without creating something new every time.

Finally, outsourcing content production is possible. Hiten gave the example of Kissmetrics, which, at some point, had 99% of its blog posts written by guest authors.

To summarize, you need both quantity and quality in technical content, tailored to your audience, and you can tap into external talent to create it. And guess what, I provide precisely this kind of service through my consulting business. Contact me to learn more!

There is a lot of buzz around “no-code” tools that empower people to build things without writing code. Website builders like Wix fall into this category, and so do iPaaS offerings like IFTTT or Zapier. Makerpad is a community where people can learn how to launch a business with only those tools and without having to be, or hire, a developer. While I love and use some of those tools myself, they are also limited and don’t offer the full power of programming.

Anil Dash is the CEO of Glitch, a web-based IDE with a cloud-based runtime where people can write code and connect with a community of developers. He recently published an article on LinkedIn about a concept called “Yes Code”. Anil has similar sentiments about the potential of being able to code and believes that we should empower people to learn it instead of just hiding the code behind the abstraction layers of “no-code” tools. He goes on to describe coding as a superpower and how it can help us build a better, “new human web” when we include more people in the process. I don’t want to repeat his points, so go and read his article.

For me, Anil’s thoughts are a good reminder of why I’m passionate about excellent API design and unique developer content. Yes, we need good material to teach the basics of programming, but we also need to make our APIs, SDKs, and (open-source) libraries accessible and beginner-friendly. It is not only the right thing to do if you care about being inclusive; it also makes good business sense to extend your audience and help someone build their next independent business on top of your API.

I can help you improve your API design to make it better for everyone, not just beginners, and I can create additional content to teach your API or developer product. Send me an email or fill out this form to learn more about my services.

It’s May 4th today. Happy Star Wars Day!

In case you didn’t know why this is Star Wars Day, think of the famous quote from the movies: “May the Force be with you”. Well, doesn’t “may the force …” sound a bit like “May, the fourth”? It’s a pop-cultural reference, and not everybody gets it. That made me think about whether or not to use cultural references in technical writing and developer content.

On the one hand, there is a particular set of famous cultural works that are associated with “nerds”, and being a software developer is considered being a part of the same (sub)culture. Developers can bond over shared interests in movies, music, etc. in the same way as they can bond (or playfully fight) over their favorite programming language or text editor. Fictional worlds provide engaging scenarios away from the mundane daily (home) office life, adding color and depth to sample code and tutorials. Why not take your first steps into the world of APIs with the Star Wars API?
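If you’re wondering how small that first step could be, here’s a tiny PHP sketch of my own (assuming the public swapi.dev endpoint is still online and that allow_url_fopen is enabled) that fetches a character and prints the name:

```php
<?php
// Minimal sketch: fetch a Star Wars character from the public SWAPI
// (https://swapi.dev) and print the name. Assumes allow_url_fopen is enabled
// and that the endpoint is still reachable.
$json = file_get_contents('https://swapi.dev/api/people/1/');
if ($json === false) {
    exit("Could not reach the API.\n");
}
$character = json_decode($json, true);
echo $character['name'] . PHP_EOL; // "Luke Skywalker"
```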

On the other hand, referencing works from the Western, male-dominated nerd culture could backfire and make women and people from different cultural backgrounds feel excluded. I firmly believe that writing code and participating in the API economy is for everyone. Hence, we should be accommodating to folks from all walks of life.

Additionally, heavy use of references to commercial works of art could be considered copyright infringement. That is something especially larger companies should think about (and consult their legal department on) before they lean on these works too heavily.

That said, are you looking for additional tutorials for your API, with or without cultural references? Check out my website for developer content production offers and talk to me about them. I am looking forward to hearing from you.

In my corner of the Internet (or dare I say “filter bubble”), I’ve seen a lot of recent conversations resurfacing the “garden vs. stream” metaphor for the web. There was also a virtual IndieWebCamp popup session about the topic, which I sadly only heard about after the fact.

For those unaware of the metaphor, its origin seems to be a 2015 keynote (or its transcript) by Mike Caulfield, “The Garden and the Stream: A Technopastoral”. It compares most of the current web to a stream where content primarily appears in chronological order. In contrast, the garden is a hyperlinked, timeless representation of connected content.

People who run personal websites as blogs are turning to wikis as a way to represent information. Anne-Laure Le Cunff of NessLabs, who was one of the main motivations for me to try Roam to organize my thoughts and research, has started Mental Nodes as her “mind garden”. It is a site based on TiddlyWiki that serves as the published counterpart of her private research notebook. The garden metaphor and the expression “tend to your garden” both apply to hyperlinked web content as much as they do to the mind itself.

It seems to me that many people are nostalgic about the pre-blog-era web, where individual homepages served as an informal outlet for their creators. However, I think there are good reasons that the stream dominates as the primary mechanism for content creation and consumption, especially in the mainstream (pun intended!).

While our human brains are capable of networked thinking, I believe that it is an art to connect the dots between multiple areas of your life and the world around you. It is even harder to dive into the networked thoughts of another person because there is no clear path. I’m not saying it’s impossible, nor am I disputing its value, but it’s much harder than tapping into a stream or appending your current thoughts to said stream.

People love stories and storytelling. And by that, I don’t just mean fiction, but also the kind of stories that journalists create from real-life events and those that marketers use to sell us products. A story may require some background information, but it is a coherent piece of its own. Every story we hear or read adds to our mental model of the world, even if we don’t consciously make the connections; and even if we don’t, we can still enjoy it on its own when it appears in the stream.

Every blog post, every tweet, everything we create can be considered a snapshot of our thoughts and ideas. These are, however, polished versions, not just raw dumps. It might be pretentious to call a post like this a story or even art. However, I hope it has some value, more than what I believe access to my notes in wiki form could provide. And it is clearly a snapshot of myself in May 2020, which adds relevant context in case my opinions evolve or change in the future.

Therefore, I’m unlikely to publish a mind garden for myself, but I’m happy to continue streaming stories to you.

It’s May 1st, the start of a new month! It’s also Labor Day, or Workers’ Day, or whatever you like to call it. I hope you enjoy your holiday despite the lockdown measures and, if you go outside, keep the necessary social distance.

Last night I listened to an episode of “The Future of Content” podcast 🎧 in which Lorna Mitchell was the guest on the show. I don’t usually subscribe to this podcast, but I discovered this episode because I know and follow Lorna.

It was a delightful 31-minute conversation, which I can recommend. I don’t want to summarize the entire episode, but I wanted to repeat a few significant points.

A lot of the episode dealt with the docs-as-code workflow. With docs-as-code, technical writers use tools like Markdown and Git to manage their content in a workflow similar to that of developers. That workflow appears to be an overall trend, as it brings implementation and documentation closer together.

Additionally, it ties in well with two other aspects. One is reusability. Lorna stressed the importance of keeping the content and presentation separate. While this might seem obvious to developers (think HTML for structure, CSS for style), for documentarians working with WYSIWYG tools like Microsoft Word, it is a new concept. The huge advantage is that you can repurpose content in different ways, for example, between various conference talks, your website, a PDF whitepaper, and more.

The other aspect, specific to APIs, is the use of OpenAPI. Apart from a short “elevator pitch” from Lorna about how great OpenAPI is, the episode didn’t dive too deep into it. But it reminded me of the unconference session I attended at the last virtual API the Docs event. In that session, we talked about how companies are doing exciting things with build pipelines that combine structured documentation (e.g., API references in OpenAPI) with Markdown files for more free-form documentation.
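Just to illustrate the idea, here is my own toy sketch of such a pipeline step in PHP, not what any of those companies actually run; the file names (intro.md, openapi.json, docs.md) are made up:

```php
<?php
// Toy sketch of a docs build step: merge a hand-written Markdown intro with a
// reference section generated from an OpenAPI description. Assumes the input
// files exist; ignores path-level keys like "parameters" for brevity.
$intro = file_get_contents('intro.md');
$spec  = json_decode(file_get_contents('openapi.json'), true);

$reference = "## API Reference\n\n";
foreach ($spec['paths'] as $path => $operations) {
    foreach ($operations as $method => $operation) {
        $summary    = is_array($operation) ? ($operation['summary'] ?? '') : '';
        $reference .= sprintf("### %s %s\n\n%s\n\n", strtoupper($method), $path, $summary);
    }
}

file_put_contents('docs.md', $intro . "\n" . $reference);
```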

At the end of the episode, there was also a short conversation about Twitch streamers and how they explore new ways of explaining programming and technical concepts.

If you need assistance with your APIs, their documentation, and content production for developers, I think this is a great time to plug my freelance consulting business. You can learn more about my services and contact me through my website.

If you are a German and have been on the Internet for more than a few years, you probably remember studiVZ. The social network launched at a time when Facebook was still very new and only available to college students in the United States. In its first iteration, it looked much like Facebook, just red instead of blue. A leaked PHP error message indicated that one of the source files even had the name fakebook.php. The network later expanded to high school students (“schülerVZ”) and the general public (“meinVZ”) but had no chance against the global giant. The company was sold multiple times and became practically irrelevant.

I was all the more surprised when I heard that the latest owner has relaunched the network, now calling it simply “VZ” (VerZeichnis = directory). It’s a complete redesign of the old social network I knew, but it looks solid. There is no general newsfeed; all interactions happen in groups. That is in line with the prevailing social media trend of niche communities and “dark social”, as people realize that everybody broadcasting all the time creates a lot of content that either overwhelms or is rendered invisible by the algorithms.

There is no sign of APIs and integrations for VZ yet, and also no business model beyond advertising. Its only selling point with regard to privacy is that the servers are physically located in Germany.

I signed up mostly out of nostalgia. I’m not sure if VZ has any chance, but, if you know me, you know I have a lot of sympathy for everybody who doesn’t just accept the Facebook monopoly and tries to do something different.

The blog you’re reading right now has existed since March 7, 2018. It is a hosted microblog on the micro.blog service run by Manton Reece. The service is a hybrid between blog (and podcast) hosting and a social network with a timeline. It launched on Kickstarter in January 2017 and opened its doors later the same year. I supported the campaign as backer #592.

I’ve blogged a bit, but I’m not a super active community member. Still, I enjoy listening to Micro Monday, the weekly podcast introducing people who blog on the site. Catching up on the two latest episodes this morning motivated me to write a bit about the history of my (micro)blog.

In my time online, I have had a variety of personal websites and blogs. Somehow I didn’t stick with most of them and started over a few times. Then, in 2012, a service called app.net launched. It was what you could call a headless social network: the idea was that you had a centralized social graph and data storage, but you could use all sorts of apps and services to access it. It was an answer to the tendency of other social networks like Twitter to restrict their APIs and drive people to their official apps. At the same time, I followed the IndieWeb movement, the idea of owning your content and primarily making it available on a domain name you control while also integrating with existing social networks. Eventually, I married both approaches and built an open-source application called phpADNSite. With phpADNSite, your content and interactions lived on app.net, but you could present them on your domain through a custom template. Your domain also connected app.net with the IndieWeb.

Unfortunately, app.net stopped further development in 2014. There was still an engaged community at the time trying to support the platform under the “ADNFuture” banner, but it didn’t help. In March 2017, the platform shut down for good. Luckily, I had already considered this scenario when building phpADNSite and implemented a backup feature that served my old app.net content as a static website after the shutdown. It just didn’t allow me to create and share anymore. So, for a while, I couldn’t publish new content.

Since I still liked the general idea of separating data storage and presentation, I considered a variety of hosted DBaaS (database-as-a-service) or headless CMS (content management system) offerings as a replacement. Also, instead of a full application like phpADNSite, it could be served by a FaaS (function-as-a-service) serverless offering. In my mind, I dubbed this “cloud-native IndieWeb”. However, I couldn’t decide on one specific approach. I wanted to experiment with several, but I didn’t have the time. That’s when I concluded that, even though “selfdogfooding” is a central idea of the IndieWeb community, it didn’t make sense to have my writing outlet in the same place where I do coding experiments, as it made both activities dependent on each other.

One of the reasons why I signed up for the micro.blog crowdfunding in the first place was its unique, hybrid approach; it reminded me of my own. At the time of backing, I had no idea how I would use it. But eventually, I decided that having a hosted blog on a service that roughly follows my ideals is a great approach. I don’t need to host my own and can still retain some control through my domain name.

I hope you enjoyed this little backstory of my blog, and I sincerely hope that I will find some more time to experiment more with IndieWeb technologies and the “cloud-native IndieWeb” approach.

Recently I have heard a lot about a new software product called Roam Research. According to its website, it is “a note-taking tool for networked thought”. Anne-Laure Le Cunff of NessLabs, in particular, seemed full of praise for the application. I still remember when Evernote launched and was described as “an extension of your brain”. But Roam seems to be the one fulfilling that promise because its structure is much more like a brain. I’ve used the tool for roughly two weeks now and wanted to write a summary of my experience and why and how I use it.

Generally, I do quite a bit of reading online, and I collect information that feels important to me from the articles I read, mostly by copying verbatim quotes. I used to copy those to Evernote, where I had notes for different topics in which I would collect these quotes and their source URLs. Titles of such notes could be something like “API Design”, “Developer Experience”, “Digital Transformation”, or “Climate Change”. And this is where the problems start. For example, what about an article that covers the impact of digital transformation on climate change? It should go in both notes. Alternatively, I could create a note for every external piece, of course, but then the only way to connect the thoughts would be to make extensive use of tagging, which I don’t do a lot in Evernote.

Roam is a web-based combination of a wiki and an outliner. Even though you also create notes or pages, Roam makes it very easy to link pages together, inline, using hashtags (#) or double brackets ([[ ]]). Every page is a hierarchical list of hypertext paragraphs, and you can link from any hierarchy level. The application also shows you when you have used a term for which a page exists but haven’t linked it, so you can decide whether you want to connect the thoughts or not. It can also visualize your whole database as a graph. In Roam, it is not a problem to add every article you read as a page of its own and then establish links to the other material you have read, which makes the whole thing more comfortable and more rewarding.
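To make the linking idea a bit more concrete, here is a rough PHP sketch of what is conceptually going on; this is my own illustration with made-up notes, not Roam’s actual implementation: extract the [[links]] and #tags from each note and build a backlink index.

```php
<?php
// Rough sketch (not Roam's actual implementation): extract [[page links]] and
// #hashtags from a set of notes and build a backlink index.
$notes = [
    'Digital Transformation' => 'Cloud adoption accelerates [[Climate Change]] debates. #APIs',
    'Climate Change'         => 'Data centers matter here, see [[Digital Transformation]].',
];

$backlinks = [];
foreach ($notes as $title => $body) {
    preg_match_all('/\[\[([^\]]+)\]\]|#(\w+)/', $body, $matches);
    $links = array_filter(array_merge($matches[1], $matches[2]));
    foreach ($links as $target) {
        $backlinks[$target][] = $title; // pages that reference $target
    }
}

print_r($backlinks); // e.g. "Climate Change" is referenced by "Digital Transformation"
```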

I have a wide array of interests. Even my primary professional area has many interconnected aspects if you look at the API lifecycle and all the facets of an API (design, implementation, security, etc.) and then at developer experience and developer relations, which involve, for example, technical writing. Then, there are many other areas of interest of mine, such as self-development, the future of work, basic income, effective altruism, and environmental issues. I don’t see these interests as separate domains but rather as various aspects of a whole that can influence each other, and where unusual connections can appear.

There are links, for example, between the API economy and the future of work. However, the picture in my mind still feels incomplete, and I lack the language to describe how it all fits together and what it means. I will keep organizing my thoughts in Roam, and I’m confident it will help me complete my mental model.

If there’s anything negative I can say about Roam, it’s that it is quite new, so it’s not certain how it will develop. It doesn’t have an API (or integrations) of its own yet, something I believe is a minimum requirement for any SaaS product launching today. Still, you can import and export data. Also, it’s free to use, with no pricing or published business model yet. I assume it will be a moderate monthly subscription, but it would be nice to know for sure.

Have you tried Roam already, and do you have any tips for me to make the most of it? Please let me know what you think! Thank you!

Security is an essential aspect of API design and implementation. And while implementing proper security measures can be hard, sometimes it’s the most basic stuff that goes wrong. The most recent APIsecurity.io newsletter was a good reminder of that.

A WordPress plugin, RankMath, introduced an API endpoint into a WordPress instance. And it added this endpoint without any authentication or authorization checks, leaving it open to the world. There are very few cases where an API can deliberately omit authentication for anonymous access, for example, when you provide access to data that is public anyway. But the default approach should always be to implement authentication and test that the endpoint rejects all unauthorized requests.
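For illustration, here is a minimal sketch of how a WordPress plugin can register a REST endpoint that rejects unauthenticated requests; the namespace, route, and capability are made up and this is not RankMath’s actual code:

```php
<?php
// Minimal sketch of registering a WordPress REST endpoint that rejects
// unauthenticated requests. The namespace and route are hypothetical examples.
add_action('rest_api_init', function () {
    register_rest_route('example-plugin/v1', '/settings', [
        'methods'  => 'POST',
        'callback' => function (WP_REST_Request $request) {
            return new WP_REST_Response(['status' => 'updated'], 200);
        },
        // Without this check, the endpoint would be open to the world.
        'permission_callback' => function () {
            return current_user_can('manage_options');
        },
    ]);
});
```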

Another, even more fundamental, problem occurred with the Tapplock smart lock. The IoT device used unencrypted HTTP to communicate with its server. Nobody should use unencrypted HTTP anymore, and most definitely not for APIs.

The newsletter also mentioned “broken object-level authorization” vulnerabilities in both Tapplock and another smart device, TicTocTrack. These so-called BOLA problems occur when proper authentication is in place, but the code doesn’t check authorization for every object. It is a hard problem that cannot be solved in API design or with OpenAPI descriptions; your implementation code must prevent it. Once again, testing is your friend, and tests should not only cover success cases but also those you expect to fail, to make sure they actually do.
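To make this concrete, here is a generic sketch of the ownership check that prevents BOLA, plus two calls that show a success case and a case you want to fail. It is my own illustration with made-up data and names, not code from either vendor:

```php
<?php
// Generic sketch of an object-level authorization check: authentication alone
// is not enough; the handler must also verify that the authenticated user may
// access this specific object. Data and names are illustrative.
$locks = [
    42 => ['ownerId' => 7, 'locked' => true],
    43 => ['ownerId' => 9, 'locked' => false],
];

function getLockStatus(array $locks, int $currentUserId, int $lockId): array
{
    if (!isset($locks[$lockId])) {
        return ['error' => 'not found', 'status' => 404];
    }
    // The BOLA-preventing check: ownership, not just a valid login.
    if ($locks[$lockId]['ownerId'] !== $currentUserId) {
        return ['error' => 'forbidden', 'status' => 403];
    }
    return ['locked' => $locks[$lockId]['locked'], 'status' => 200];
}

var_dump(getLockStatus($locks, 7, 42)); // allowed: user 7 owns lock 42
var_dump(getLockStatus($locks, 7, 43)); // rejected: lock 43 belongs to someone else
```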

At the very least, however, make sure you have authentication in place (you can specify that in OpenAPI) and always use HTTPS!

Last night I took part in the first virtual API the Docs edition, where I listened to two great talks, one by Leah R. Tucker of {your}APIs and one by Kristof van Tomme of Pronovix. The event took place via GoToMeeting, with discussions happening in parallel on Slack. There was also an unconference part with breakout sessions happening via Google Meet, but unfortunately, I had to leave after the talks so I couldn’t join them.

Leah talked about “Designing a future-proof API program”. She drew parallels between supply chains and large numbers of APIs in an organization, emphasizing the need for consistency in APIs. I liked how she approached it not just from the perspective of developer experience but also the more general brand experience. That might be the right way of putting it to get buy-in from non-technical management to invest in API design and build up a data steward team.

Kristof talked about “Beyond API Spray & Pray: Devportals in Digital Transformation”. He described two trends of digital transformation: the first is the redefinition of closeness, replacing physical proximity with digital proximity; the second is market complexity, for which he referred to the Cynefin framework. APIs and developer portals can help in achieving transformation. Kristof also gave an overview of the different types of developer portals and the roles they play.

I enjoyed both talks and the Q&A that followed them. If you’re curious about the next event, you can register on the Eventbrite page and also join the new API the Docs Slack workspace.

Meetups, events, and conferences remain canceled. That affects API the Docs as well. Just over a month ago, I wrote that I am volunteering on the speaker selection committee for their Portland conference and that I’m looking forward to attending the European editions in Cologne-Bonn and Brussels later this year. Portland is not happening, and neither is Cologne-Bonn. So far, Toronto in September and Brussels in November are still on, but it remains unclear how the global crisis will unfold. I hope that politicians lift strict lockdown measures or contact restrictions soon (maybe when we have enough face masks and privacy-friendly contact tracing apps). Still, I also feel that international conferences may not happen for an entire year. Once the series starts again, I’m happy to get back on speaker selection duty.

While the speaker committee has dissolved, the speakers still have an opportunity to give their presentations, just in a different format. Instead of an all-day conference, there will be smaller, bi-weekly virtual API the Docs events with two talks each. The first event is on Wednesday, April 8th; however, at the time of writing, it is already at maximum capacity. Make sure you register for an upcoming event on the Eventbrite page and also join the new API the Docs Slack workspace, where the social part of the events takes place and where you can learn more about the virtual series.

The new coronavirus is slowing down public life and the economy. At the same time, however, I am observing the public discussion expand, especially on Twitter, around two topics that I am very interested in: remote work and Universal Basic Income (UBI).

For us lucky knowledge workers who just need a computer and an Internet connection to get work done, remote work was always an option, but its global impact was limited. For every successful distributed company, there’s another one believing in “butts in seats”. That may change, as at least a fraction of the people who are working remotely for the first time may find it works well for them and their employers or clients. They may use this option much more in the future, with all the benefits (e.g., fewer carbon emissions from commuting) that come with it.

On the other hand, there are and always will be people who get work done with their hands and bodies out in the real world. Some of them have to continue working, but others won’t. Direct support to their employers or a reduced tax burden does not reach all of them, especially self-employed workers in the “gig economy”. Handing out cash, on the other hand, does help everyone and may be a stimulus for an economy hit by the coronavirus. It is the right time to try a temporary UBI, or at least some one-time cash transfers, to collect more data points proving that they work.

Along with my professional interests centered around APIs and developer experience, I have always been curious about the future of work. Every software developer and other person working in IT is in some way (maybe unconsciously) building that future. I believe that the API economy is one of the cornerstones of a world that Pieter Levels described as billions of self-employed makers and a few mega-corporations. We already have the latter, but for the former to thrive, we need UBI as a safety net. And they will be working remotely.

If there’s anything good coming from the current crisis, maybe it’s kickstarting the conversations about the essential topics for the future.

The API the Docs conference series is coming back to North America with an edition in Portland on 1st May, 2020, and I’m happy to make an announcement: Along with Laura Vass, Leona Campbell, and Yuki Zalkov, I’ll be part of the speaker selection committee.

The call for proposals (CFP) is still open until February 29th, after which the committee will review the submitted talks and choose the ones that we feel are most interesting and valuable to the community of API practitioners.

I’ve supported API the Docs in the past by being a part of the DevPortal Awards jury in the last two years, and this year I’m excited to volunteer for the community in a different role.

While I won’t be attending the Portland conference myself, I’m looking forward to meeting you at the two European editions in Cologne-Bonn and Brussels later this year.

Let’s indulge in a bit of nostalgia this weekend. I just remembered one of the websites that I used to frequent a lot around 15-20 years ago. The site was called klamm.de, and it was a German paid portal site. Or should I say, it is, because if you follow the link, you might see that the site still exists. It looks almost the same and still has the features that were developed in its early years after its launch in 1999.

At the time, “getting paid for looking at ads” was the latest fad, with paid email promotions, reviews, and even “surf bars” that would continuously show rotating banner ads next to your browser. All with sophisticated multi-level affiliate programs to make sure you’d invite your friends. And late-teenage me was much more curious about the ideas and about making some money (though I never made anything substantial) than critical of advertising and the privacy-invading technology behind it, as I am today.

Anyway, klamm.de was less about the earnings and more about the community, the so-called “klammunity”, and I spent quite a bit of my time on the site’s forums. Also, I assume that the site was responsible for the interest in APIs that drives my work today. How so?

At one point, klamm.de introduced “Lose” (lottery tickets) as its virtual currency, which users could bet to win prizes and also trade among each other. And, to drive this process, site owner Lukas Klamm (with whom I coincidentally share a first name) created an API called ExportForce. I remember the first thing I did: I took a JavaScript-based roulette game that I had created as part of my high school computer science class and hooked it up to the API so that you could win “klamm Lose” playing roulette.

Of course, it was a stupid idea, because the game ran on the client and reported results to the server, so you could easily cheat. Still, it inspired other hobby developers in the klammunity to build things around the API. And I learned a lot from it, too.

It’s interesting to see some of the “paid4” sites still around, even though the earnings are minuscule and we’re already annoyed enough by all the advertisements we don’t get paid for. I deleted my klamm.de account after not using it for a few years, but I’d love to log in again and take a trip back in time.

Last week, I published a release of phpMAE along with an announcement post and tutorial on the CloudObjects blog. This week, as a follow-up, I have written a little about the background of the breaking changes in my open-source PHP-based sandbox for serverless/Function-as-a-Service (FaaS) development.

I finally managed to push a new release for phpMAE, my open-source PHP-based sandbox for serverless/Function-as-a-Service (FaaS) development, and (experimental) hosted service, which is a part of CloudObjects. For this release, I’ve updated the Getting Started tutorial and published it on the CloudObjects Blog just now. I’d be happy if you give it a try!

This is my first blog post in 2020, so first of all: Happy New Year! 🎇

The beginning of a new number on the calendar is a good time for some self-reflection. Among other things, I have thought about my relationship with social media again. Like many others who spend a great deal of time on the Internet, I have a sort of love-hate relationship with it. On the one hand, I enjoy the power of social media to connect people. On the other hand, it’s kind of addictive and can lead to mindless scrolling, which is a huge timesink and makes you feel unhappy.

For Facebook, I’ve reenabled the Disable Facebook News Feed Firefox extension, which I used before but disabled at some point.

For Twitter, I’ve taken a little inspiration from Glitch CEO and blogger Anil Dash, who wrote about cleaning up his Twitter feed for the beginning of the new year. The post is from 2018, but he sent out a tweet indicating he did the same thing this year.

I couldn’t convince myself to be as radical as Anil, so I used Tokimeki Unfollow instead. The application is inspired by Marie Kondo: it shows you the accounts you follow one by one, along with their latest tweets, and asks whether they “still spark joy”. You can then choose to either unfollow or keep them. The process is comparable to swiping through Tinder and similar apps.

I unfollowed inactive accounts, those whose tweet frequency is too high, and those where I can’t remember why I started following them. I kept friends and people I’ve met in person or interacted with lately. It wasn’t a vast purge, but at least I got down from 492 followed accounts to 302. My Twitter feed feels different and less overwhelming now.

For other networks, I haven’t made any changes.

Today I came across an article by Erik Dietrich called “Learning in a World Where Programming Skills Aren’t That Important”. I haven’t found the time to read Erik’s book Developer Hegemony yet, but I’ve read and enjoyed a lot of the writing on his blog.

Early in the article, he recounts his definition of an efficiencer. The difference between an efficiencer and a programmer is this: the programmer writes code while the efficiencer solves a problem.

A while ago, I wrote a post about a contract in which I built API-driven automation on top of Airtable instead of continuing the custom-built CRM that the previous developer had started to create. In that post, I also described my belief that a developer’s job shouldn’t be writing code but solving problems. Erik’s writing partly inspired my reasoning, but at the time, I didn’t have a fancy term for it. Now, however, I believe that the project I mentioned is an excellent example of an efficiencer’s work.

I enjoy coding, and I love writing code that does something smart. I even tend to grow attached to the lines I wrote. But the value I can provide doesn’t necessarily lie in that code but in understanding requirements and solving them in the best way.

I’m happy to announce that I have launched a new profile website for my freelance consulting business. The site centers on developer content production, which I have strategically decided to focus on, although it mentions other services as well. It describes the importance of content for API providers and developer-focused companies and how I intend to help them create and document sample applications for their APIs in eight steps.

Unlike my blog, which is in English only (some thoughts on this in my last post), the new profile website is available in two languages.

You can find the German version at lukasrosenstock.de and the English version at lukasrosenstock.de/en. Any feedback on the site is always appreciated 😊

Last night I listened to the latest @monday episode in which @macgenie interviewed @ton. It was quite inspirational; I especially liked the idea of his “Birthday Unconference”.

Something else that got me thinking was their discussion at the beginning of the episode about blogging in different languages. Ton primarily writes in English but also sometimes posts in Dutch or German. He used to dabble with separating the languages into different blogs but ultimately decided to put everything in the same feed and tag or categorize the content.

Personally, I never liked the idea of having multilingual content on the same blog, even when it’s tagged or categorized (inconsistently enough, I do post in multiple languages on Facebook, though). At the same time, I probably don’t put out enough content to justify multiple websites. I used to have multiple Twitter accounts, but even that was a little cumbersome to manage.

My blog and my tweets are mainly about the tech industry, especially the narrow API and DevRel niche. My business targets international clients, and I share a lot of external content, most of which is in English as well. Therefore, I think it makes sense for me to focus all my writing on English-language content. On top of that, as I mentioned in my recent post about motivations to blog more, I want to improve my written English. Another important aspect is that focusing on one language avoids the mental load of deciding which language each new post should be in.

On a side note: I do have one German 🇩🇪 social media presence, though, and that is my Innovators Gießen Twitter account, where I share mostly tech content with local relevance to the region where I live.

Right now, I’m sitting on a train en route to Hamburg. A friend and I have tickets for the performance by Ludovico Einaudi in the Elbphilharmonie tonight 🎹. It’s my first time in this new and iconic concert hall, so naturally, I’m excited!

I’m resuming my work next week, so if you’re trying to get in touch, please bear with me, and I will get back to you on Monday.

Recently I encountered the term full-stack freelancer through an article by Tiago Forte on his Praxis blog. I had heard of full-stack developers, but I had never heard that term before, so I was intrigued. Tiago defines such a person as someone who has a broad portfolio of different projects and multiple income streams coming from varied activities. It’s the opposite of a freelance expert who specializes in a single offering in a specific niche.

I don’t want to go in-depth at the moment regarding the entire concept, but I’d like to highlight one of his thoughts that was a proverbial lightbulb moment for me. After thinking about it, I realized it’s obvious, though I can’t remember someone explicitly stating this thought.

The idea is that certain activities are impossible to focus on as a full-time position or have greatly diminishing returns, but doing them in moderation can be extremely beneficial.

For me, paid guest posts are one such activity. I’ve done quite a few in the past. They have provided me with exposure, some money, and the opportunity to learn a lot, which I could then apply to other gigs, such as software development projects. I mentioned this briefly in yesterday’s post about the motivation to write more. However, I could never be a full-time blogger because I would soon run out of ideas and lucrative opportunities to write. It’s valuable to do this infrequently, though.

Tiago mentions other things that he does once in a while, such as coaching and consulting, which are part of his varied portfolio.

For me, this ties into the discussion about generalists versus specialists and the hybrid variant, T-shaped skills. It also adds to the idea of the gig economy as the future of work. Different projects could allow a person to focus on the middle of the T while having occasional contracts that strengthen the ends of the T, with every client benefiting as a result.