Go 1.26: 240% better! This episode: 340% longer! Venn: 100% awesome
Oh, it's that time of year where I need to edit the card from saying 2025 to 2026. This show is supported by you. This is Cup o' Go for January 9, 2026. Keep up to date with the important happenings in the Go community in about fifteen minutes per week. Yeah.
Shay Nehmad:Not this time. Not this time. I'm Shay Nehmad.
Jonathan Hall:I'm Jonathan Hall.
Shay Nehmad:Happy New Year,
Jonathan Hall:Jonathan Merry Christmas. Happy Hanukkah. All those fun holiday things that we all did.
Shay Nehmad:Yeah. Now it's time for getting back to bee business. Woo hoo. 20 honey sticks, back in bee business. Alright.
Shay Nehmad:We have one update about conferences. And then we're going to have a sort of a different episode. If this is your first time joining us, this is not our normal pace. We're just gonna go through everything that happened in Go one twenty six and try to summarize it so you know what this new release is about.
Jonathan Hall:We should say what's about to happen because it's not released yet. It should come out next month. But we have a pretty good idea what it's gonna contain. So, yes, as you mentioned, conferences. A couple episodes ago, which feels like sometime late last year, I don't know when that was, we had an interview with organizers from GopherCon.
Jonathan Hall:I have a long history with Go, I guess. I worked at Disney and we started playing with it early on, me and some coworkers in, like, 2009, 2010, late two thousand nine, early two thousand ten when it was first released. Seemed interesting. Didn't really pick it up then. Tickets are finally on sale and they reached out to me directly and asked me to let listeners know if you're fast, you can get a $200 discount before the January.
Shay Nehmad:Hot damn. $200 is nothing to sneeze at.
Jonathan Hall:That's right. Gophercon.com, early gopher tickets. There's a total of 100, and I bought one. So there's only 99 left. Actually, it says 76 as of this recording.
Jonathan Hall:So the first 76 listeners to head over to gophercon.com and save $200 on their GopherCon ticket. GopherCon's coming up this August in Seattle. We'd love to see you there.
Shay Nehmad:We'll we'll probably travel. Oh, in Seattle? Yeah. I might fly over.
Jonathan Hall:You better go buy your ticket quick.
Shay Nehmad:Honestly, I might. I might just.
Jonathan Hall:But also, there may be some sort of collaboration between the show and the conference happening at the conference. Details to be determined. We're working on something like
Shay Nehmad:Yeah. But
Jonathan Hall:go buy your tickets now before the cheap ones are gone.
Shay Nehmad:Yeah. So you can scalp them later. Oh, wait. No. No.
Shay Nehmad:Don't do that. So Go one twenty six is coming out in February, and I've been waiting for Anton to put out the, Uh-huh, release notes. Luckily, he did it just in time. This is one of the most exciting releases I've seen, mostly because none of it is controversial.
Shay Nehmad:It's all good. There's nothing I look at here where I'm like, you know, "why are you doing this?", like with generics or whatever. It's a very improvement-y release, which is, I think, a super strong benefit of Go. Yeah. You know, and we'll talk about all the features.
Shay Nehmad:But I just wanna say, I am 100% using Anton's release notes. We'll put a link in the show notes.
Jonathan Hall:Also a link to the official release notes, which are the boring, non-interactive version. But they're official. Yes. So you can look at both.
Shay Nehmad:We had an interview with Anton. Of course, he was on the show.
Anton:Okay. My name is Anton. I do some open source stuff, and I write interactive, maybe I can call them guides or books, and interactive articles on my blog. That's mostly what I do in my free time.
Shay Nehmad:Yes. I agree with you, Jonathan. The official release notes are pretty dry. So I am 100% using Anton's, which I highly recommend. It's a really good service for the Go community.
Shay Nehmad:And, yeah, let's get it started. Who's kicking it off? Is it me or you?
Jonathan Hall:I I wanna I wanna start with something new. Nice. Yeah. The whole episode is new. Right?
Jonathan Hall:There there's a new newness in Go one twenty six.
Shay Nehmad:New is always better.
Jonathan Hall:Always. We've talked about this before, and I've ranted about this even on a video I did a couple years ago, 10 things I hate about Go. One of the first things on my list was that you can't make a pointer to a literal. So, like, this frequently comes up if you're building a struct and you need to set some default values, but one of the fields is a pointer. Like, maybe it's a person struct and you have a name in there, and maybe you don't know the name all the time, so you want it to be a pointer to string, right?
Jonathan Hall:And you want to say, now I know the person's name, it's Bob. You can't just create a pointer to the string Bob. Like, can't do ampersands. Yeah, you have to put
Shay Nehmad:it in a variable and then use the ampersand, and it's like, wait, why am I writing c again?
Jonathan Hall:And then virtually every single project out there has its own utility function to do this for you. Claude recreates it every time it needs it. Well, now that won't be necessary. The new keyword, which already existed, now accepts a value. Previously, it only took a type, so you could say, like, new(int).
Jonathan Hall:Now you can say new(42), and it creates a pointer to that value for you.
Shay Nehmad:Not just, like, normal values like a bool or an int, but also composite values, so like a list of numbers or a full struct, and even a function call. So I imagine the last one might be nice for people who are trying to do functional-ish things with Go. You know what I mean? So instead of defining a function, putting it in a variable, then putting an ampersand onto it, which makes the whole thing feel less functional and less readable, now they can have more parentheses, just like they like from their Lispy languages.
Jonathan Hall:Yay for Lisp.
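A minimal sketch of how this might look, assuming untyped constants like 42 and "Bob" take their default types; this only compiles on Go 1.26 or later:

```go
package main

import "fmt"

type Person struct {
	Name *string
}

func main() {
	// Before Go 1.26 you needed a temporary variable or a ptr() helper:
	//   name := "Bob"
	//   p := Person{Name: &name}

	// Go 1.26: new accepts an expression, not just a type.
	p := Person{Name: new("Bob")} // *string pointing at "Bob"
	n := new(42)                  // *int pointing at 42
	s := new([]int{1, 2, 3})      // pointer to a freshly built slice

	fmt.Println(*p.Name, *n, *s)
}
```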
Shay Nehmad:What is not allowed to be new still? Nil. No new nils. No new nils in 2026.
Jonathan Hall:Nil can't be, like, you can have a nil pointer, but you can't have a new of nil? You can't
Shay Nehmad:point to nil. Yeah. That makes sense. Yep. I remember from my C++ days that pointing to zero zero zero zero was a specific thing.
Shay Nehmad:Mhmm.
Jonathan Hall:I don't
Shay Nehmad:know if that's a thing in Go. Anyway, new things. Another improvement to the language, a thing that existed before and is now slightly better, is generics. So a generic function or a generic type takes a type as a parameter. Right?
Shay Nehmad:So let's say I have a function and it's called reverse, gets a list of whatevers, let's call them t, and reverses them.
Jonathan Hall:Right.
Shay Nehmad:A slice of t's. Yeah. Right? Mhmm. Sometimes you wanna restrict these tees.
Shay Nehmad:Right? The type parameters. So you don't want reverse to be able to accept anything. You want it to be able to accept generically something that is, for example, comparable. That's a super Yeah.
Shay Nehmad:Common use case. So even before 1.26, you could restrict type parameters and say T comparable. But you couldn't refer to type parameters within type parameters, so, like, recursively constrain types. This sounds more confusing than it is, but a good example is if you want to define a generic ordered type.
Shay Nehmad:Right? So you're saying, oh, it's Ordered of T, but that T also has to be ordered. So now you can say Ordered that gets a T, an Ordered[T] interface.
Jonathan Hall:So to clarify, like, I I think you may have misspoken a little bit. I think you said you can't have type parameters in your type parameters. But you mean you can't have recursive type parameters? Like, you you could have a p that references a t, but it can't
Shay Nehmad:it can't in turn reference a p. Recursive. Yeah. Yes. Yes.
Shay Nehmad:You could, like, have a map that has, you know, k and v, and v is compare comparable.
Jonathan Hall:Right.
Shay Nehmad:So, sorry, K is comparable, but you couldn't have the K in the constraint refer back to K itself. Like, you couldn't, well, now you can. If you're using a lot of generics, I think it's useful for, you know, people who maybe write generators or are trying to deal with complex data structures and algorithms. It's for you.
Shay Nehmad:Honestly, if not, just know that generics are slightly better and you can continue ignoring them.
Jonathan Hall:This is one of those things that I feel like I wanted at once, but I can't remember why. Yeah. So, anyway. Moving on. Yeah.
Jonathan Hall:Another generic improvement, for error checking. You probably used errors.As before, I imagine.
Shay Nehmad:All my errors are as Sorry. It was just too easy.
Jonathan Hall:I'm not even sure what that means, but okay.
Shay Nehmad:I don't like it because when I handle errors in other languages, it's exception catching. It's pretty easy to do, like, catch exceptions of this type Yeah. Catch exceptions of that type. It makes the entire other parts of the language much worse, but that specific part of, like, narrowing the type of the exception I wanna check is normally easier.
Jonathan Hall:Yeah.
Shay Nehmad:And in Go, I have to do, like, if errors.As is something of a specific type, and again, I have to use an ampersand. Yeah. Like with new.
Jonathan Hall:You do. Yeah. And I think the biggest problem with errors.As is that the pointer semantics are confusing at two different levels, because you have to, like, pass a pointer to an error that the error can be cast into, effectively. But sometimes it needs to be a pointer to a pointer to an error, depending on whether that error itself is a pointer or not.
Jonathan Hall:In it and, like, you have to really dig in, write a lot of tests to make sure you're doing it correctly. I don't have to write tests anymore probably after this. Thank goodness. Because I hate tests. No.
Jonathan Hall:It's not true. But the change: they have now added errors.AsType, which is a generic function, which takes the error type as the type parameter.
Shay Nehmad:Yeah. The generic type parameter. So the thing in the square brackets.
Jonathan Hall:Exactly. So it's a little bit easier. It's maybe longer to type, but it's also on one line only instead of two. You don't have to do the little ampersand definition first thing. But I think it will help solve a lot of that.
Jonathan Hall:Like, is this a pointer or a pointer to a pointer? All that sort of confusion I think will
Shay Nehmad:go away. It moves it from test time to compile time? Yeah. Way better. And also, by the way, I think it's super readable.
Shay Nehmad:Like, Mhmm, if errors.AsType of the specific error type, and then the value. Yep. That makes a lot of sense
Jonathan Hall:It just returns that to you, which is more normal than the idea of, like, pass in the return value and let it be modified, you know, internally.
Shay Nehmad:The output parameter. I hate that stuff.
Jonathan Hall:Those are annoying.
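A hedged sketch of the difference, assuming the new generic helper lands as errors.AsType[E error](err) (E, bool); the exact signature is an assumption here:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Open("does-not-exist.txt")

	// The pre-1.26 way: declare a target and pass its address to errors.As.
	var pathErr *fs.PathError
	if errors.As(err, &pathErr) {
		fmt.Println("errors.As:", pathErr.Path)
	}

	// The Go 1.26 way (assumed signature): the matched error is returned,
	// so there is no output parameter and no ampersand to get wrong.
	if pe, ok := errors.AsType[*fs.PathError](err); ok {
		fmt.Println("errors.AsType:", pe.Path)
	}
}
```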
Shay Nehmad:And honestly, it's just a recommended drop-in improvement. Right? Because it does everything that errors.As does. Well, maybe we'll see it in go fix, which we'll talk about towards the end of this
Jonathan Hall:Yeah. In about three hours when we get there.
Shay Nehmad:No. No. No. Let's move fast. Okay.
Shay Nehmad:Next thing. Green tea garbage collector. So we talked a lot about the green tea garbage collector. I'm gonna just not do it justice. It's a better garbage collector on average.
Shay Nehmad:Now garbage collection is a super interesting problem, and Go obviously is a garbage collected, like, memory managed language. Right? Green Tea was introduced as experimental in 1.25, so we talked about it, like, nine months ago-ish at length. It should be more efficient on modern computers with many CPU cores. So it's not strictly better in every case.
Shay Nehmad:It's a trade off. Normally, garbage collection that, you know, you learn in university is, like, mark and sweep. It's graph algorithms where you have the root of the program, and then the root of the program allocates things, and then those things allocate things, and eventually you have a graph. The garbage collector scans the nodes in that graph, right, and, like, colors them. And anything that never gets colored is unreachable, so it can be removed from memory, these sorts of algorithms.
Shay Nehmad:Treating objects as nodes and pointers as edges and whatever doesn't really take into consideration, like, the physical structure of memory. With modern computers and, you know, modern whatever, this is sort of like random access memory. And if you learned, like, in university about hard disks and whatever, when that was a constraint, when disks were still spinning and it wasn't all NVMe and SSDs, you wanted to do things that were closer together. I know, Jonathan, you like to hate on Windows, but the reason I understand this topic is because I used to really love clicking on the defrag your C drive thing. Yeah. You remember that, watching that little thing go by in color?
Shay Nehmad:Oh my god. So satisfying. Honestly, if someone implements a screensaver for that, I would totally install it. Anyway, the CPU just waits a ton of time because of this implementation on modern computers, where you have bigger page sizes. So Green Tea, instead of thinking in nodes and objects, thinks in memory: instead of individual objects, it's like, I'll scan the entire memory in spans of eight kilobytes and find small objects that I can, like, just yeet.
Shay Nehmad:And the algorithm is interesting. If you're interested in this stuff, I'd recommend you just look at it. But honestly, it should be up to 40% better in garbage collection overhead in real world programs that run on modern CPUs. And if you have, like, extra modern CPUs that support SIMD, which we'll talk about in a second, it's even 10% more. But there are no public benchmarks.
Jonathan Hall:That's 40%. What did you say what the numbers were? 30% better and 10% more?
Shay Nehmad:It was It's like 440%?
Jonathan Hall:That's 40% more of betterness. Yeah. That's that's how percentages work. Right?
Shay Nehmad:Yeah. 60% of the time, it works every time. That doesn't make sense. Anyway, this is enabled by default now. So if your programs suddenly get faster, or specifically, I think you should care about this if you're not running on modern hardware, like back end services on Kubernetes, whatever, you can disable it and use the old garbage collector by setting GOEXPERIMENT=nogreenteagc.
Shay Nehmad:And if you do do that, please open an issue for the Go team to let them know which case was inefficient. Because what I imagine will happen, since this option is removed in 1.27, is that by 1.27 they'll find all the little trade offs and somehow smooth them out in this implementation and end up with a hybrid. At least that's what I hope will happen so everybody can enjoy. Mhmm. Green tea, everybody.
Shay Nehmad:Not supported by Cup of Go, which is strictly a coffee
Jonathan Hall:That's right.
Shay Nehmad:Based program.
Jonathan Hall:Alright. Next up, more percentages of improvement: faster cgo and syscalls. I wasn't gonna go into the details here, but I just can't resist the urge to mispronounce a few words. So Okay.
Shay Nehmad:Great program. Great idea for an audio based show.
Jonathan Hall:That's right. That's right. So in Go, we have these things called processors, not to be confused with, like, a physical processor, but, like, represents a programming unit that can process something. Represented as p, and previously, we had the idea of pruning, piddle, and pig stop. It's not quite right.
Jonathan Hall:Prunning, Pidle, and Pgcstop. And also Psyscall. These are different states that a processor can be in. They have eliminated Psyscall, which makes certain syscalls faster, especially in cgo, up to a 30% performance improvement. So we add that to the 40 you just talked about, we're already 70% better.
Shay Nehmad:Hot diggity damn. Closely related to that, except
Jonathan Hall:not really, but it's performance.
Shay Nehmad:But wait. It's only for cgo. Right? So who benefits from that and who doesn't? What does cgo even mean?
Jonathan Hall:If you use cgo, then it benefits you. Cgo is C code called from Go, basically.
Shay Nehmad:Got it.
Jonathan Hall:Probably not gonna affect most of your programs most of the time.
Shay Nehmad:If your Go is pure Go, just the standard library and whatever, it's not relevant. But honestly, a lot of programs do use cgo for library bindings, like if you use a graphics library or a database driver.
Jonathan Hall:Sure. Or FFmpeg or maybe SQLite or, you know, there's a whole bunch of things you might be using it for.
Shay Nehmad:Yeah. Or if there's a specific syscall you wanna call that isn't exposed by the standard library syscall package.
Jonathan Hall:Cool. Also, on performance related issues, faster memory allocation for small objects. I'm not gonna go into the details here, but if you're allocating memory for objects up to half a kilobyte, up to 512 bytes, it's faster now, up to 30% faster. I think we're at a 100% better Go already if we add those percentages together.
Shay Nehmad:Go runtime overhead drops to zero for every program. Apparently solving computing was easy. You just had to use Go.
Jonathan Hall:To be serious, though, the Go team estimates a 1% improvement overall from this 30% improvement to specific cases. So that's still pretty impressive.
Shay Nehmad:Yeah. Just shaving 1% off. I also love, I know you didn't wanna go into the details, but I actually love how they got this improvement, which was to take the general purpose implementation for memory allocation and just put in a specific jump table for small objects.
Jonathan Hall:Oh, hell.
Shay Nehmad:So, like, take this hot path and just implement it using jumps, you know, to make it as fast as possible, and then fall back to the, like, general purpose implementation for bigger objects. Sometimes it's a simple thing. Not simple. And for our data science and math people in the audience, there is experimental support for vectorized operations, which immediately shakes me to my core because I don't know math, but I'll try to explain it. This is actually a hardware architecture specific thing and not a math thing; it allows you to run SIMD, single instruction, multiple data operations.
Shay Nehmad:So it's like a low level package; it allows you to run a single instruction that does, like, a calculation over a lot of data. If you're like me and you're coming from assembly, but you're also like me and you learned assembly with, like, a simple assembler and Turbo Debugger or whatever, you're aware that you can, like, compare two registers or, you know, multiply them or move them or whatever. But SIMD operations allow you to do that with, like, large vectors of numbers: add or compare or convert or mask or rearrange or divide or dot product, which I think is the highlight, a bunch of vectors. And why would you wanna do that? This is super relevant for recently relevant use cases.
Shay Nehmad:So, you know, video and audio encoding, ML inference, scientific computing, and even things like crypto and compression could benefit from SIMD instructions. This is experimental still, so it only exposes a specific set of instructions, and it's probably relevant for specific architectures. And honestly, it probably has a lot of bugs, but try. Try to use it and give them feedback, and you can see in the benchmarks that it's, like, so much faster, which is not surprising at all.
Shay Nehmad:Right?
Jonathan Hall:What percentage faster? Because that's what matters.
Shay Nehmad:I can quickly try and compare, but it looks like at least an order of magnitude better. So if you could do, you know, add on on two large vectors, It used to be a 100 megabytes per second, and now it's not a 111,000 megabytes per second, and now it's
Jonathan Hall:So it's like 90% faster.
Shay Nehmad:Now it's gonna be 127,000 megabytes per second. I will say this is like a GPU-y thing that you can do on the CPU, so I think it's gonna be really good for things like Ollama. And, you know, if you're trying to build all these things and not rely on people having GPUs, which I feel is a trend. I don't know about you, but I feel like a lot of people want to do, like, in-browser small language models, or on-device, you know, on your phone, audio to text, like running Hugging Face models and whatever. You know, I don't wanna get into the whole AI and where it runs debate, but I feel like people are trying to run AI in more places.
Shay Nehmad:This is, like, sort of the backbone operation that eventually will lead to people easily writing these, you know, like the Hugo people we had on the show, developing these libraries and tools for this stuff. Cool things. So basically, we
Jonathan Hall:can say Go has MMX support now. Right? Like it's 1993 again? Alright.
Shay Nehmad:We can build this new again.
Jonathan Hall:Yes. That's right.
Shay Nehmad:I've got a secret. Uh-oh. I've been hiding under my skin. Alright. It's 2026.
Jonathan Hall:Duda, do do you want to know a secret?
Shay Nehmad:Oh, no. I'm referring to mister Roboto.
Jonathan Hall:Right? I was referring to a different song.
Shay Nehmad:Yeah. But it's 2026. No?
Jonathan Hall:Oh, okay. Very very good.
Shay Nehmad:Yeah. So Mr. Roboto has secrets, and Go now has a secret mode. We discussed this pretty recently.
Jonathan Hall:Yeah.
Shay Nehmad:I'm gonna go over it very briefly. But when you're using things like TLS let's say you're setting up a web server and you wanna protect it. Right, Jonathan?
Jonathan Hall:I always wanna protect my web servers.
Shay Nehmad:So you set up TLS. What properties would you like TLS to do for you? What does it do?
Jonathan Hall:It makes things magically secure.
Shay Nehmad:So more specifically It encrypts
Jonathan Hall:my it encrypts my data.
Shay Nehmad:Yeah. So it encrypts
Jonathan Hall:your Prying eyes. Yeah. Prying eyes can't see what I'm sending and receiving.
Shay Nehmad:Using something called a private key. Right? Yes. And what happens if you leave your private key out? You know, you're going out to Starbucks, you're getting their 500,000 calorie milkshake for the season Mhmm.
Shay Nehmad:And you're leaving your laptop open, someone comes in, steals your private
Jonathan Hall:key. Then they could read
Shay Nehmad:my secret messages. So ideally not, because there's something called forward secrecy, which not a lot of people know about. Even if an attacker gains access to long term secrets, like your private key that you left on your unlocked laptop in Starbucks, not secure,
Jonathan Hall:I won't do
Jonathan Hall:that again.
Shay Nehmad:They shouldn't be able to decrypt past communication sessions. So, you know, while they have your private key, they can decrypt current ones, but past ones, they shouldn't be able to, and then you rotate the key and everything's good. And to do that, you have, like, ephemeral keys, so, like, you know, a session key. So not only a private key, but also a session key. But if someone has access to your server, they might just write down these ephemeral keys as well, these session keys from memory.
Shay Nehmad:And in, you know, like Rust or C, you could delete these keys from memory immediately after using them. But in Go, you don't manage the memory, so you can't really do that. Well, now with secret mode, you can open, like, a closure in which everything gets immediately deleted, and not only deleted, zeroed out. So even if you have, like, a stack overflow, buffer overflow, you know, some sort of a memory leak between processes, you'll only see zeros there. It happens as soon as the garbage collector decides the values are no longer reachable, even within the secret closure, and it's super fast.
Jonathan Hall:Oh, wow. Okay.
Shay Nehmad:If you have sensitive information that doesn't need to stay in memory longer than necessary, do that. I will say, don't overuse this unless you know you need secret mode, because this is clearly super inefficient.
Jonathan Hall:Right. And
Shay Nehmad:you know, you don't wanna do that if you don't need to. It's not meant for normal developers. It's basically for developers who implement cryptographic libraries and apps only. And if you're writing an app, you should just make sure that your library does use secret.Do behind the scenes, and not do it yourself. Probably.
Shay Nehmad:Basically, it's only aimed at, like, Filippo. Right? But, Filippo, I hope you like your feature.
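A hedged sketch of what that closure looks like, assuming the feature ships as a runtime/secret package with a Do(func()) entry point; the import path, the Do signature, and the deriveSessionKey helper are all assumptions:

```go
package main

import (
	"fmt"
	"runtime/secret" // assumed import path for the new secret mode
)

// deriveSessionKey stands in for real key agreement (ECDH or similar); hypothetical.
func deriveSessionKey() []byte {
	return []byte{0x42, 0x13, 0x37}
}

func main() {
	var fingerprint byte

	// Inside secret.Do, memory allocated by the closure is zeroed as soon as
	// the garbage collector sees it is unreachable, instead of lingering
	// until the memory happens to be reused.
	secret.Do(func() {
		sessionKey := deriveSessionKey()
		fingerprint = sessionKey[0]
		// sessionKey is not referenced after the closure returns,
		// so its backing memory gets wiped, not just freed.
	})

	fmt.Println("kept only a non-secret byte:", fingerprint)
}
```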
Shay Nehmad:So that's about secret mode, but we have more crypto stuff to get into.
Jonathan Hall:We sure do. I have a couple of different things here. The first one, readerless cryptography. I love that kind. That's where I write a secret and nobody ever reads it.
Jonathan Hall:It's kinda like writing to /dev/null. Wait a minute.
Shay Nehmad:That's not
Jonathan Hall:what this is. So readerless cryptography. There are a number of crypto functions that take an IO reader for for random data. Hopefully, you're always using crypto.rand for this, not math.rand. Although math dot rand might make sense for for testing or whatever, but for real stuff, you always only use crypto rand.
Jonathan Hall:The problem is sometimes a crypto algorithm might change internally in a way that changes the way it uses those bytes. This is mostly a matter for tests, but I suppose it could matter in other cases too. If you have generated a key this way and codified it in a test, for example, and then you upgrade Go to a new version that uses a different algorithm, your test might suddenly start failing in unexpected ways.
Shay Nehmad:Not because any hypothesis the test is actually testing is failing, just because the sequence or the number of bytes read is different.
Jonathan Hall:So to solve this problem in a roundabout way, the Go team has made the decision to just ignore those readers now.
Shay Nehmad:So it doesn't matter what you pass. It's gonna use the same, like, random source?
Jonathan Hall:It's gonna use a correct, cryptographically secure random source regardless of what you pass in.
Shay Nehmad:And here's a stupid question. Yeah. Why not remove the parameter from the function?
Jonathan Hall:Yeah. Because that would be a breaking change. Right? So I suppose they could do that in a future version. They could add v2 methods right now.
Jonathan Hall:Right? Two functions. One that takes it, one that doesn't. But
Shay Nehmad:But this is like a total API breaking change.
Jonathan Hall:Yeah. You
Shay Nehmad:know? All your programs are gonna fail on expected one parameter, but got two. So they don't want that, so they're just ignoring the second one.
Jonathan Hall:Exactly. And this applies to a dozen or so different functions in various crypto algorithms. You don't need
Shay Nehmad:to change what you're doing. Unless you're writing crypto.
Jonathan Hall:Unless you're testing, then it'll suddenly start breaking. Oh, yeah. If you have tests that depend on something like this, they will start being random when you were expecting them to be deterministic, so you'll have to write new tests.
Shay Nehmad:Luckily, this change also includes the set global random function in the crypto test library. So you'll have to change your test, but only once. Yep. Cool. Cool.
Shay Nehmad:Cool. Another crypto thing happening in the crypto sphere, which honestly sounds like that new structure in Las Vegas, which I'm planning to go to next week, is a package that implements hybrid public key encryption, HPKE, which has an RFC and whatever. You can go read it. It's a standard, a new standard for hybrid encryption.
Jonathan Hall:Hybrid? Why not just jump straight to electric?
Shay Nehmad:Nice. Honestly, I had a friend get stuck with an electric car, a Rivian, in Tahoe and run out of battery because the only supercharger is in Jackson.
Jonathan Hall:Oh, no.
Shay Nehmad:And then he waited twelve hours for a charger spot. So I think there is still a
Jonathan Hall:Okay. So that won't happen with this library.
Shay Nehmad:Yeah. For sure. If you learned about cryptography and asymmetric cryptography, you're probably aware of the concept of a public key and a private key: you know, I give the whole world my public key and people can encrypt with it, but only I can decrypt it with my private key, etcetera, etcetera, to create, like, a shared secret without having to share private keys between people. But that doesn't work for, like, really big pieces of data.
Shay Nehmad:RSA and the current algorithms are mostly used to handle: here is my key, here is your key, let's do that with RSA, and then the rest of our communication uses symmetric encryption. This HPKE implementation ostensibly should have all the benefits of public key systems, but be able to encrypt very large files or messages. There's a lot of implementation stuff going on, but this is the right way to do public key encryption right now. So if you've ever implemented RSA or used one of these algorithms, you should switch. Also, you know, as time goes on, people find more and more ways to defeat old algorithms.
Shay Nehmad:One example I like is that Shamir, one of the original RSA researchers, so he's the S in RSA, put out research from Ben Gurion University that if you put a phone, have you heard about this side channel thing?
Jonathan Hall:I don't think so.
Shay Nehmad:You put a phone next to a desktop computer, on the same desk, that is decrypting things with an RSA private key, and using the phone's gyro, if it uses the same key, which is very normal for, you know, an HTTPS server, they were able to figure out the key with a very high level of certainty after a few hours, just based on the desk's vibrations, which is crazy. Right? I think it was a combination of vibrations and audio from the processor or whatever. But a newer algorithm, if it doesn't have any glaring security vulnerabilities, which at this point, if it's an RFC and it is a standard, it's passed the test, is probably better than, like, I don't know, RSA, which is late seventies tech.
Shay Nehmad:Right? Right.
Jonathan Hall:Alright. Next up, enough with the crypto stuff. Let's talk about goroutine leaks. The new version actually has two related features. One is goroutine leak profiling, which is currently experimental. This is newly exposed through pprof, where you may already know how to do memory profiling, CPU profiling, and so on.
Jonathan Hall:This will help you profile for goroutine leaks.
Shay Nehmad:What are goroutine leaks?
Jonathan Hall:Yeah. That's when your goroutine drips all over your desk and lets the phone read your secret keys.
Shay Nehmad:Yeah. So it's just, like, stuck in a sync operation. Right?
Jonathan Hall:Yeah. Yeah. If you have a goroutine that probably isn't doing anything, or maybe it's stuck in an infinite loop, but it's not exiting
Shay Nehmad:So I can detect them today using Go one twenty four with synctest, right, which we talked about on the show at length.
Jonathan Hall:If you've written an appropriate test, that would that would help, but
Shay Nehmad:This is in production. Yeah. Which is way better. These things usually happen in prod and not in tests anyway, right? Yeah.
Shay Nehmad:I just used PyLeak to do the same thing in Python, trying to migrate from, like, sync to async APIs in Python.
Jonathan Hall:Oh,
Shay Nehmad:yeah. And boy, it's crazy to see how much better this stuff is in Go. We're just, I don't know, what's the word I'm looking for? Maybe commiserating? Mhmm.
Shay Nehmad:All the listeners who listen to this show aspirationally, but get their paycheck from writing Python code. Here's us commiserating together that we have to use things like PyLeak and the event loop and whatever instead of just using these very awesome tools. And there's another awesome tool in the metrics. Right?
Jonathan Hall:Yeah. So runtime/metrics is a new package, or I don't know if it's a completely new package, but it has new capabilities to give you goroutine metrics at runtime, which I'm sure is what the profiling is built on top of. But if you wanna do some lower level stuff and do your own profiling or metrics gathering, you can do that too
Shay Nehmad:with runtime/metrics. I imagine you put this in, like, your Grafana dashboard or whatever.
Jonathan Hall:Mhmm.
Shay Nehmad:And then if the numbers go up, that's bad. Right? Because it means you have a lot of goroutines waiting, and maybe your CPU can't handle the demand or whatever. You know, you can count how many goroutines you have and also how many OS threads you have, which is also interesting: like, if you have an imbalance there of, like, a single thread running a lot of goroutines, maybe you wanna rearchitect your, like, concurrency thing to utilize cores more efficiently or whatever.
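A small sketch of pulling goroutine-related numbers out of runtime/metrics; it discovers metric names at runtime rather than hard-coding the new Go 1.26 ones, since those exact names aren't quoted here:

```go
package main

import (
	"fmt"
	"runtime/metrics"
	"strings"
)

func main() {
	// Ask the runtime which metrics exist and keep the goroutine-related ones.
	var samples []metrics.Sample
	for _, d := range metrics.All() {
		if strings.Contains(d.Name, "goroutine") {
			samples = append(samples, metrics.Sample{Name: d.Name})
		}
	}

	// Read them all in one call; this is cheap enough to scrape periodically
	// into something like a Grafana dashboard.
	metrics.Read(samples)
	for _, s := range samples {
		if s.Value.Kind() == metrics.KindUint64 {
			fmt.Printf("%-50s %d\n", s.Name, s.Value.Uint64())
		}
	}
}
```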
Shay Nehmad:More metrics. DevOps people are gonna be happy. Right? Alright. Reflective iterators?
Jonathan Hall:Yes. This is fun. So you may recall the iterators were added, the iter package and
Shay Nehmad:Of course. You hated it with the iter two, zip two.
Jonathan Hall:I hated, like, the naming. I love the functionality. I don't like the naming. Anyway, they keep adding iterators, as they've started doing over the last few versions, to more libraries. Now the reflect package gets some iterators, to iterate over fields in a struct, things like that.
Jonathan Hall:So it's just a natural evolution of adding new capabilities to the reflect package.
Shay Nehmad:Yeah. I like the example, again shouting out Anton here, where you do reflect.TypeFor on filepath.WalkFunc and then you can look at the ins and outs, literally, those are the function names, which is very cool, of this type. So I don't know.
Shay Nehmad:I thought it was very cool. Values and methods too, like, using the, you know, reflect iterators thing. I don't know where I'll use it. I've never had the need for it, but it's very
Jonathan Hall:I don't use reflection very much, but if I do in the future, I'll probably use it. Another one I won't use very much, but I'll be happy it's there when I do need it, is the ability to peek into a buffer. We talked about this many episodes ago, I think when it was approved. So bytes.Buffer did not give you this ability before; I think you could do it on a file, but not on a bytes.Buffer. It's basically: read the next byte in the buffer without consuming it. That's been added.
Jonathan Hall:It makes certain operations a lot more efficient because
Shay Nehmad:not specifically, like JSON parsing. JSON parsing, whatever.
Jonathan Hall:JSON yeah. JSON Yeah. Or or any kind of parsing where you're like, I don't know if the next thing is a string or an array, you know, using the JSON as an example. You can look at that and read the next byte. Oh, is it a curly brace?
Jonathan Hall:Is it a square bracket? What is it? And based on that, I can then branch to the next thing that consumes the byte. Prior to this, if you needed to do that, you'd have to, like, reconstruct your io.Reader in a weird way to do the same thing.
Shay Nehmad:I will slightly correct you and say that you can peek into as many bytes as you want. It takes a parameter. But normally, you would just look at the next character or a few bytes. This is also useful, you know, let's say, not just for JSON unmarshalling: you're writing a file type thing and you wanna read the first magic bytes, right, or things like that.
Shay Nehmad:It is kind of weird, in my opinion, that it doesn't give you a copy of the data. It gives you a slice into the real data in the buffer. It makes a lot of sense that that's the default behavior, because you can always clone, but it is risky, because you can peek into a thing, and to me, peeking sounds like a read only slice of data that I'm getting, but then I can edit it, because there's no such thing as immutability, and I'm actually editing the underlying buffer. So use with caution.
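A hedged sketch of the kind of branching this enables, assuming the method lands as (*bytes.Buffer).Peek(n int) ([]byte, error), mirroring bufio.Reader.Peek; the exact name and signature are assumptions:

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	buf := bytes.NewBufferString(`{"name":"Bob"}`)

	// Look at the first byte without consuming it. The returned slice is
	// assumed to alias the buffer's internal storage, so treat it as read-only.
	first, err := buf.Peek(1)
	if err != nil {
		panic(err)
	}

	switch first[0] {
	case '{':
		fmt.Println("looks like a JSON object")
	case '[':
		fmt.Println("looks like a JSON array")
	default:
		fmt.Println("something else, maybe check magic bytes")
	}

	// Nothing was consumed: the full payload is still there to parse.
	fmt.Println(buf.String())
}
```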
Shay Nehmad:What's the thing, the American thing? Buyer beware. Right? A smaller one: process handles. So if you're using Go on Linux and Windows, instead of doing stuff with processes using the process ID, which might change and isn't the best, especially if, you know, you're running in a container, or not in a container, it can get confusing.
Shay Nehmad:You can get the, sorry, not file handle, process handle, I meant to say. So you can get the handle to the process, which is guaranteed to refer to the process even if it died. Right? Because what can happen is you get a process ID, and by the time you want to do something with that process ID, another process took that PID, took that number.
Shay Nehmad:But if you have a handle, it's guaranteed to refer to the process even if it's already terminated. Like, crashed, but you're still looking at the same thing, and that way you don't get mixed up with logs and whatever. A great, great suggestion and improvement from Kir Kolyshkin. And, you know, it makes sense when you see who the people behind these are. You know, we mentioned the crypto stuff: Filippo is obviously the person who suggested those.
Shay Nehmad:This one is from Kir Kolyshkin, who is the runc, like, Open Containers contributor, and he works on, you know, Podman and all these things, where missing the process and, you know, grabbing the wrong container is probably a thing he deals with a lot.
Jonathan Hall:Cool. Next up, signal as cause. Have you ever used the signal.NotifyContext function before?
Shay Nehmad:Yeah. To grab, like, a signal when someone kills the server. These are the things, right?
Jonathan Hall:Yeah. The typical thing is, maybe that's one of the very first things you do when you're starting up a server process. You use signal.NotifyContext to create a sort of global context that is canceled when someone terminates the program.
Shay Nehmad:So if someone hits Ctrl-C, you want to try and close all the connections nicely, and then if they Ctrl-C again, it's like, yeah, yeah, whatever. Alright. Got it.
Jonathan Hall:Right. So, up to now, when that happens, you just get a context canceled error. So your database operation that was in the middle of happening, you hit Ctrl-C, it just gets context canceled. You don't know if it was canceled because of that or because of some other reason. Well, signal.NotifyContext will still return context canceled, but it decorates that context with a cause, which will tell you which signal caused the cancellation.
Jonathan Hall:So if you care in your program, you can examine that context after it's been canceled to see was this canceled due to user interaction or due to a signal, and then do what you want with it. So that's cool.
Shay Nehmad:It is the string of the signal.
Jonathan Hall:Yeah. So it just returns an error string. It doesn't give you any clean way to map that back to the signal itself. Although there have been some comments on the issue tracker hoping to add that in a backward compatible way. We'll see if that gains traction.
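A small sketch of inspecting the cause, using the existing signal.NotifyContext and context.Cause APIs; the new part is only that, on Go 1.26, the cause is expected to carry which signal fired (as an error string, per the discussion above):

```go
package main

import (
	"context"
	"fmt"
	"os/signal"
	"syscall"
)

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	<-ctx.Done() // block until Ctrl-C or a TERM signal arrives

	// ctx.Err() is still context.Canceled, as before.
	fmt.Println("err:  ", ctx.Err())

	// New in Go 1.26: the cause records which signal triggered the cancellation,
	// so graceful-shutdown code can log or branch on it.
	fmt.Println("cause:", context.Cause(ctx))
}
```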
Shay Nehmad:It does have uses, you know, that I probably don't think about; the original proposal was about using a cancel cause func, whatever. Like, it has uses that I don't understand, but it would be nice to just print out the signal you got. Mhmm. I guess. A simple one now, one that, if you implemented it yourself, you're probably going, ugh.
Shay Nehmad:Why did they add it after I implemented it myself? Comparing IP subnets. So why would you want to compare IP subnets to begin with? What even is an IP subnet? I don't know how far to go down this rabbit hole.
Shay Nehmad:An IP subnet is the thing where you have 10.0.0.0/16. Yeah. So it's pointing at a bunch of addresses. Why would you, like, sort or need to compare subnets in general? It's when subnet relationships matter.
Shay Nehmad:So if you're writing a routing table, or if you wanna know if one IP subnet contains another. So if someone configures, like, oh, these are the IP subnets I wanna support, you can tell them, oh, you can merge these two items into one, because one contains the other. Or if you want to check firewall correctness: you know, if you want to allow all addresses but deny some of them, and the deny is bigger than the allow, it contains more addresses, then that's just wrong. Right? And you wanna highlight, in whatever firewall you're implementing right now, that this is wrong, for many other reasons. I also think one reason is for people who are writing SIEM systems, like for security, you know, to make lookups faster and whatever, if you need to compare them, basically.
Shay Nehmad:So now you have parsed prefixes, and you can easily sort with the netip.Prefix.Compare function. The sorting is not obvious, because what happens if you pass an invalid prefix? Or what happens if you compare IPv4 and IPv6?
Jonathan Hall:I would expect IPv4 to sort within the IPv6 space that contains it? Nope. No? Okay.
Shay Nehmad:I don't know. It's always hard. It's always hard because this is not a natural thing, but the order is: validity, so invalid prefixes go to the top, then address family, so first IPv4 and then IPv6, then by masked address, then by prefix length, and then by the IP itself, like the unmasked address.
Jonathan Hall:So if I have an IPv4 address and its IPv6 representation, they are neither equal nor sort next to each other.
Shay Nehmad:IPv4 is gonna be first.
Jonathan Hall:That's weird though that, like, they they aren't considered equal. And I and I suppose it depends on your your application whether you want them to be equal or not. Right?
Shay Nehmad:It's just, that's the standard. There is no natural order to these things, because you can't really compare IPv4 and IPv6.
Jonathan Hall:Camels are big first. There
Shay Nehmad:are more addresses, the address space is bigger. Yeah. So there is no, like, natural solution to this. So the way they solved it is, oh, I'll just go with this standard. Right.
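A hedged sketch of sorting with the new comparison, assuming Go 1.26 adds a Compare method on netip.Prefix that implements the ordering described above (validity, then address family, then masked address, then prefix length, then the unmasked address):

```go
package main

import (
	"fmt"
	"net/netip"
	"slices"
)

func main() {
	prefixes := []netip.Prefix{
		netip.MustParsePrefix("2001:db8::/32"),
		netip.MustParsePrefix("10.0.0.0/16"),
		netip.MustParsePrefix("10.0.0.0/8"),
	}

	// Prefix.Compare (assumed new in Go 1.26) returns -1, 0, or +1,
	// which plugs straight into slices.SortFunc.
	slices.SortFunc(prefixes, func(a, b netip.Prefix) int {
		return a.Compare(b)
	})

	for _, p := range prefixes {
		fmt.Println(p) // IPv4 prefixes first, then IPv6
	}
}
```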
Shay Nehmad:Next up, context aware dialing.
Jonathan Hall:Yeah. Another network one. So this is an interesting one because it's a problem that's been solved, but it's solved in a new way now. You can already dial. So first off, what is dialing?
Jonathan Hall:Dialing in the net package is when you create a new connection to a network service. Right? And so long ago, back in the early days of Go, you had the net.Dial function; you could pass it, say, TCP and then an IP address colon 80 to make an HTTP connection, for example. Then in Go 1.7 or whatever, they added context, so you got the new DialContext version, which takes a context as the first argument. So if it takes too long to connect, you can cancel it.
Jonathan Hall:What they've added now is a new dialer, a context aware dialer, that gives you a third way to dial using context, but it's more efficient. Mhmm. I think that's it. I don't know what more you can say about it. That's fair.
Shay Nehmad:Another very simple one: example.com. So you have an httptest server that you can use for doing HTTP tests, and the certificate already has example.com. It's, like, a thing you could use. And because of this, the client doesn't trust responses from the real example.com. So if you're actually trying to use example.com, it doesn't work.
Shay Nehmad:So and obviously, you don't want to go to the real example.com in your test. Right? You're trying to use the HTTP test.
Jonathan Hall:Unless you work at example.com. Right?
Shay Nehmad:No. I mean, yeah. But then you wouldn't create your server with httptest.NewServer. You could just use http.Server. So now the client redirects to the, you know, to the test server instead of the real example.com.
Shay Nehmad:Right. Funnily enough, I don't understand how this came up or why someone fixed it, but it was, like, from an issue from 2019 and someone just got to it. You know? I'm trying
Jonathan Hall:COVID-era boredom.
Shay Nehmad:Yeah. I don't know. But, you know, it's a fix. It's it's Sure. Just strictly better.
Shay Nehmad:And I don't know who owns, what even is example.com?
Jonathan Hall:IANA owns example.com.
Shay Nehmad:This domain is for use in documentation examples without needing permission. Avoid use in operations. Okay. So now we're avoiding use in operations.
Jonathan Hall:There it is.
Shay Nehmad:Alright. Homestretch.
Jonathan Hall:Alright. Next up, optimized fmt.Errorf. You ever get confused whether to use fmt.Errorf or errors.New when you don't have
Shay Nehmad:I got CR comments on it in the past. Like, one of them is better than the other. This is not a bikeshed to bikeshed on anymore, basically.
Jonathan Hall:Yeah. So basically, errors.New was more efficient, because it didn't do all this internal parsing of things that didn't need to be parsed. You can stop worrying about that now. Disable the
Jonathan Hall:linter. fmt.Errorf is just as efficient, or very nearly so, after this. So that's it.
Shay Nehmad:And another one that's super easy is optimized io.ReadAll. Right?
Jonathan Hall:Yeah. This one's another 50% better, so Go is now, what, 240% better overall, I think. Yeah. Basically, more efficient internal buffer allocation when doing io.ReadAll. That's it.
Shay Nehmad:I don't really understand how, like, how did we only find this now? You know what I mean?
Jonathan Hall:Good question.
Shay Nehmad:It's very I would expect these things to come up just because people use IO Read all all the time. But also, I guess it's never in the hot path of anybody.
Jonathan Hall:I mostly use it in tests. I use it occasionally in production code, but, like, if it's something big, I try to use streaming. Yes. So yeah. Yeah.
Shay Nehmad:I don't know. I guess that makes sense. One thing that's important for this one is: go read the commit message. It's just the best commit message I've ever seen, basically, for the ReadAll change. If you're getting anything out of this segment, it's that.
Shay Nehmad:Which I know is a weird recommendation, go read the commit message, but it's just very good. Multiple log handlers in log/slog.
Jonathan Hall:I love this one. It's gonna let me stop using some third party libraries that make me nervous sometimes. So, yeah, now you can basically chain multiple slog handlers together, and it will iterate through them and write to all of them. So if you need to send to, I don't know, standard output and to a file and to a cloud provider or whatever, you know, you can do them all with the same handler without using a third party library.
Shay Nehmad:What was the log product thing? We had someone on the show. Spark logs. Yeah. Spark logs.
Shay Nehmad:Yep. So you can have all of them. This is very useful. I I can can I tell you what's my use for the use case Yeah. With this one?
Shay Nehmad:So locally, I want my logs to be, like, console logs and very pretty and, like, you know what I mean? Like, colored. But in production, I always want single line JSON logging. And the reason I want single line JSON logging is I can use whatever cloud provider I'm using to read my logs at the moment to do, like, you know, where message equals a thing, and then the other fields, like, structured logging. I don't need to explain it. But sometimes I wanna run these, like, queries locally as well.
Shay Nehmad:You mentioned the goroutine leak profiling before. I did that with, like, the asyncio runtime in Python, and I really wanted to use angle-grinder, agrind, to locally, you know, do some statistics on how many Python threads, like, it's not threads, but whatever, how many Python things I have running at the time. So putting the normal log on standard error, the pretty, you know, colored whatever log, Mhmm, and then putting a debug log into a rotating 50 megabyte JSON file on my hard disk that I can then query using, like, jq or agrind, is such a useful use case that I've done it for every single server that I'm seriously working on, where I'm actually using these logs.
Shay Nehmad:This makes it so much easier.
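A hedged sketch of that local setup, pretty text on stderr plus JSON debug logs in a file, assuming the new fan-out handler is constructed with slog.NewMultiHandler; the constructor name is an assumption:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	logFile, err := os.OpenFile("debug.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer logFile.Close()

	// Fan out every record to two handlers: human-readable console output
	// and structured JSON you can later query with jq or agrind.
	handler := slog.NewMultiHandler( // assumed Go 1.26 constructor
		slog.NewTextHandler(os.Stderr, nil),
		slog.NewJSONHandler(logFile, &slog.HandlerOptions{Level: slog.LevelDebug}),
	)
	slog.SetDefault(slog.New(handler))

	slog.Info("server started", "port", 8080)
	slog.Debug("only the JSON file sees this", "detail", 42)
}
```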
Jonathan Hall:Awesome. Very cool. I think we're at the homeward stretch here. Just a couple more to go. Right?
Shay Nehmad:Yeah. Very, very simple things. Talking about writing stuff to disk too, you know, using it as a developer: test artifacts, something we talked about in the past. You know, this is not something you couldn't do yourself.
Shay Nehmad:What you would normally do is, let's say you are running a test and you wanna write the results off to the side, right, to view them. I've written a lot of AI based, like, LLM based tests recently, meaning not that the LLM is writing the test, although that is also coincidentally true. Meaning, inside the test, calling an LLM, which is inherently, like, you know, sort of unstable sometimes, especially if the test is doing something that is, you know, generative in nature. So I wanna make sure that something gets generated, and maybe I can hypothesize it has to contain the word, you know, Jonathan. I can't promise the exact format.
Shay Nehmad:But sometimes I wanna save it off to the side to just look at it. Right? What I would do is, for every test, I would put, like, a /tmp/something, you know, every time I would jig up something different. Normally I would just use the local repo directory, and then sometimes, oops, forget to not commit it, and then I would commit test artifacts and have to rebase. Now we have t.ArtifactDir: T or B or F, whatever test type you're using right now, just has an artifact directory, and you can dump things there, like your log file, which we just talked about a second ago, or, you know, various artifacts.
Shay Nehmad:And if you run with -artifacts, it's gonna just be there. And if you don't use -artifacts, they are stored anyway, but they're deleted after the test completes. Mhmm. Which I don't fully understand, why would you ever
Jonathan Hall:do I suppose that way you can use it as a temp directory during tests, maybe? Or maybe it just exercises that code, make sure that you don't have permissions issues or something that you don't detect until you need them. I don't know.
Shay Nehmad:The only thing I could see it being useful for is if you're, like, putting a breakpoint in before the test ends. You know what I mean? And you're running the tests with a debugger, and then you're stopping, and then you're looking at the artifacts.
Jonathan Hall:It can also be helpful if your test crashes mid run that you have the artifacts, I suppose. I don't know.
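A hedged sketch of using it from a test, assuming the method is t.ArtifactDir() and the go test -artifacts flag keeps the directory around; generateReport is a hypothetical stand-in for whatever the test exercises:

```go
package mypkg_test

import (
	"os"
	"path/filepath"
	"testing"
)

// generateReport stands in for the code under test; hypothetical.
func generateReport() string { return "generated output mentioning Jonathan" }

func TestGenerateReport(t *testing.T) {
	report := generateReport()

	// Assumed Go 1.26 API: a per-test directory for outputs you may want to
	// inspect later. With `go test -artifacts` it survives the run; without
	// the flag it is cleaned up when the test finishes.
	out := filepath.Join(t.ArtifactDir(), "report.txt")
	if err := os.WriteFile(out, []byte(report), 0o644); err != nil {
		t.Fatalf("writing artifact: %v", err)
	}

	if len(report) == 0 {
		t.Fatal("expected a non-empty report")
	}
}
```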
Shay Nehmad:Oh, true. So anyway, test artifacts: a nicety that is very, very, very nice, for sure. And last one. All these awesome features that we've talked about, how will I remember to change all my code to use them? And this is one twenty six, so ostensibly there are 25 other versions that I now need to go and read through to understand what's going on.
Jonathan Hall:It's it's hopeless. There's nothing you could possibly do. Wait a minute. What's this one?
Shay Nehmad:Don't call it a comeback. It's go fix. So you have go vet that will report problems, and go fix will modernize your code to use them in a safe way. You can just run go fix, and it'll fix all the things. Do you remember the for var thing?
Shay Nehmad:I think it was one
Jonathan Hall:of the
Shay Nehmad:first things that we talked about on our show, like,
Jonathan Hall:few years ago.
Shay Nehmad:You can do go fix with the for var fixer and it'll just fix all the things. But honestly, just run it in a pre-commit on everything and it'll just modernize your code, because these are safe changes. So, you know, instead of a for loop over a slice, it'll use slices.Contains, for example. That's right. And you don't have to remember that, which is an important thing, because I feel like a lot of code is getting generated right now, and the code that is generated is trained on old versions.
Shay Nehmad:But you do want to use these new, better functions. And you can find a full list of analyzers if you're interested in that sort of thing. Like, you know, it's sort of like reading through the entire linter list of golangci-lint. And there are some obvious things here, you know, host and port stuff and inlining things and min and max, like using the min and max functions, which I don't always remember because they came out after I started using Go, or omitzero, which we talked about on the show. You don't have to remember them.
Shay Nehmad:Go fix will just do it for you.
Jonathan Hall:Awesome. I
Shay Nehmad:can't believe we made it.
Jonathan Hall:This is a big release. It feels different to me than some of the other big releases, because it's a bunch of small changes. Like, none of these changes individually feel game changing, except maybe the very first one, the new new. That one I've been waiting for for a long time. But in aggregate, this feels like a big change.
Shay Nehmad:I think for specific people it's, oh, finally, I have secret mode. Oh, finally, I have vectorized operations I can start playing with. I've been working around it with C.
Jonathan Hall:Yeah.
Shay Nehmad:Or, you know, oh, finally, they implemented hybrid public key encryption, which I need. I also think, by the way, goroutine leak profiling is pretty big. If you've used synctest but you're still having goroutine leaks in production, this basically solves it for you. Right?
Jonathan Hall:Oh, it helps you find it. Yeah.
Shay Nehmad:But like I said at the top, it's a big, big, big release that's made up of a lot of small, very good improvements, useful updates, and a lot of performance upgrades. If you just upgrade to Go one twenty six and you don't read anything, you just do it automatically, your program might get 60% faster just for no reason.
Jonathan Hall:Absolutely.
Shay Nehmad:If you're using a lot of, like, small objects and you allocate heavily and you have a lot of CPU cores on your thing and you just upgrade to one twenty six, you might cut 50% of your program's overhead just by upgrading, which is crazy good. I might be overstating it with the 60%, because I think it's 30% on the garbage collection time, and I hope your program isn't spending a 100% of its time on garbage collection, now that I'm saying it out loud. But you know what? Even a free 3%. Who doesn't want a free 3% cashback on their cloud costs? You know what I mean?
Jonathan Hall:I think we did it. We made it through in just an hour.
Shay Nehmad:Just one final question. When is it released? February.
Jonathan Hall:That's all we know right now. Should be
Shay Nehmad:What's your bet? February.
Jonathan Hall:I'm I'm always going for Valentine's Day, but I'll bet it won't be then. That's when I want it to come out. Yeah. For sure.
Shay Nehmad:Well, we'll see. We'll see when it comes out. I hope it's, like, February 1, because I can't wait for this release to come out already and to start playing around with all of these. Looking forward to it. That does it for 01/26.
Jonathan Hall:01/26. Woo hoo.
Shay Nehmad:Stick around after the ad break, where we have an interesting interview with Arthur about their transformation to Go, and their pretty interesting and unique company.
Jonathan Hall:So Shay had to run, so I'm doing the intermission by myself. Sorry for the extra long episode. Hope you thought it was interesting. A quick reminder to check us out online at cupogo.dev, where you can find links to all past episodes, show notes for this episode and all past episodes, transcripts, swag, all sorts of fun stuff. I'm not gonna ramble for too long, partly because I don't remember what I'm supposed to be saying.
Jonathan Hall:And, also, we have another interview coming up that Shay just talked about. So this is an extra double long episode, triple long episode, something like that. It's 400% better than usual. Hope you enjoyed the show, and hope you enjoy the interview.
Shay Nehmad:Jonathan, what's your favorite diagram?
Jonathan Hall:A burndown chart? I don't know. If I took my favorite diagrams and, like, plotted them out, and your favorite diagrams and plotted them out, I don't know. How much overlap would there be?
Shay Nehmad:Yeah. Maybe we need to, like, find the characteristics we love and put, like, two circles, one here, one here, and then see the intersection, and there, all the good diagrams would be. Man, I wish there was someone who knew a lot about Venn that could help us on this call. Oh, hi, Arthur.
Arthur Vaverko:That's a great intro. Hi. Hi, everyone. Thanks.
Jonathan Hall:If you were to plot out good intros and bad intros, would there be any overlap?
Arthur Vaverko:So, yeah, I'm here, the chief software architect at Venn. So just fire away any questions that you have. I'll be happy
Jonathan Hall:to get it. Awesome. I love that. Just straight to the point. My first question.
Jonathan Hall:Boom. Boom. Boom. What's Venn?
Arthur Vaverko:So right. Venn, basically, our mission is to, like a Venn diagram, connect the community within the multifamily real estate business. So we provide an operating system for the multifamily real estate industry, right from the first time you book a visit, a tour, to leasing the apartment that you want, and up to your third, fourth, or tenth renewal of your lease. So everything in one system. That's what we do.
Jonathan Hall:So multifamily real estate means, like, apartment complexes, I guess?
Arthur Vaverko:Exactly. Yeah.
Jonathan Hall:Okay. Does it go smaller than that or, like, duplexes? Or if I own six individual homes, does that count? Or So
Arthur Vaverko:not yet. Not yet. Okay. But but we are planning, and we have a lot of dreams. We are very focused on multifamily right now and solving those tackling those issues first.
Arthur Vaverko:And I hope we quickly beat other businesses.
Jonathan Hall:And you sell a SaaS?
Arthur Vaverko:It is. Yes.
Jonathan Hall:Okay. So now here's a confusing question. When you talk about your software, what does multi-tenancy mean in your local vocabulary?
Arthur Vaverko:Yeah. So the word tenant gets a lot of meanings, and it's very confusing. And I'm all about clarity with terminology. I'm a freak about that. Okay. So words do matter, and we call them residents on our side.
Jonathan Hall:There you go.
Arthur Vaverko:So we have the tenants, organizations, and residents on our business models.
Shay Nehmad:Very good. Right. So tenants residents are like people. Yeah. And tenants is like the multitenant database.
Arthur Vaverko:Exactly.
Shay Nehmad:Tenant. Yeah. Yeah. Yeah.
Arthur Vaverko:Got it. It gets confusing when you talk to customers. But within the technical team, we do understand what tenancy means. And we try to avoid that word altogether. So we have the hierarchies: organizations, properties, units, residents, lease contracts, no tenants.
Jonathan Hall:Right.
Shay Nehmad:Cool. And this is only in The US, or is it a, like, a global thing?
Arthur Vaverko:It's in The US. We do have some business in Israel, actually. Oh, right. Yeah. Most of it is in The US across the entire country, I guess.
Arthur Vaverko:It's we have customers in Texas, we have property managers, owners, New York.
Shay Nehmad:Yeah. I do wanna, as someone who lived... so I own a place in Israel, but most of my life I lived in rental properties. And now I live in an apartment complex that's, like, managed by a company, which is a first for me. And I think maybe, Jonathan, since you were also on both sides of the pond, like, for European listeners who are listening to this right now and never, like, rented in America, this is a very different experience. You're, like, dealing with a corporation. It feels more like renting out ski equipment than it is... Yeah.
Shay Nehmad:How it felt, at least for me in Israel, like talking to some old lady who has an apartment and is willing to let me... like, you know, when you go up to her and her son helps you sign the contract and whatever. Yeah. This is like a very business-y thing where big companies buy a lot of homes. You know, it's even somewhat of a political issue, but let's sidestep that because that's not the type of podcast we are. So you sell Venn to these companies, right, that own these apartment complexes so they could manage, like, all the tenants, all the payments, all the whatever.
Arthur Vaverko:Owners and operators.
Shay Nehmad:Yeah. One thing you have in these complexes that was crazy to me: you have a guy that's in charge of maintenance.
Arthur Vaverko:And you have a concierge in some buildings at the front desk. It's like a hotel where you have, like, not a five-day stay, but a few years.
Shay Nehmad:Yeah. So it's a it's a it's an operation. It's not just, like, a guy renting out the the place. So it makes sense that you'll need software to run this operation. Like payments and setting up like tours.
Shay Nehmad:People wanna come and see the apartment before they rent it out. What sort of things does Venn solve for these people? So I assume, I don't know, scheduling maintenance calls is one and payments is another.
Arthur Vaverko:Yes. So when you look at this, you gotta look at the resident's journey across the entire industry. So you said it first: you need to book a tour. It's very business-y, and we try to make it personal. Today, with AI and LLMs, it makes it much easier to give you a personalized experience when we have the data and we can feed it to an LLM, and an LLM can send you out a personalized invite for booking a tour or something like that.
Arthur Vaverko:So you do have the agents that can manage those different workflows for scheduling a tour. That's the first thing that you do in the journey. You you're a prospect. You want to schedule a tour. You're looking for an apartment.
Arthur Vaverko:And then after that, you need to apply for a unit. You need to fill out an application form. And the application form has a lot of data to fill in. And we try to make it as easy as possible. So we do have an app for that person, for the applicant that wants to apply and invite co applicants.
Arthur Vaverko:And then it gets to all those legacy systems, the PMS systems that all those property management companies use, where somewhere in their back end they have, like, different... I don't want to do any name dropping, but there are a lot of accounting systems that we integrate with and need to sync the data into, so all the bookkeeping will be in sync. And I...
Shay Nehmad:will say that compared to by the way, compared to, you know, various SaaS software you can develop, Like messing up this is like a b to b SaaS software play, but messing up a little bit of data here or there or, you know, dropping a thing is super detrimental to someone. I can just imagine like, oh, we built a data pipeline, we built it using the Node. Js. Oh, we had a the the event loop was blocked, so we missed like 50 payments out of 10,000. Not too bad.
Shay Nehmad:On the other side, it's, like, 50 people who didn't pay their rent this month and are, like, super nervous because who knows what's going on with them. This is actually, like... it's housing. It's, like, super important.
Arthur Vaverko:So from the technical perspective, durability is critical on that point. Yeah. It has to eventually be successful and in sync. So the durability...
Shay Nehmad:the building or of the software.
Arthur Vaverko:The durability of the building is critical as well, but it's not on our end. But yeah. So you're looking at the journey. So you're a prospect, then you're an applicant. And then if you decide to move in and you are approved and you pass the screening, there are a lot of screening tests to do with different third parties that you integrate with and send, like, KYC documents and income verification and so on.
Arthur Vaverko:And then you become a resident. And this main section, where you're a resident, the living part of your journey, is something where we want to go the extra mile. We have social modules that will allow you to interact with the neighbors, interest groups, events. We have an amenities module where, I don't know if you know, but in some of the multifamily apartment buildings, the triple-A buildings, they have amenities like the rooftop pool and barbecue grills, and some might have, like, a hydroponic garden. And you can actually book them for your purposes, like to do a birthday or just to go to the pool with your friends.
Arthur Vaverko:And managing those booking schedules... some actually have a golf simulator. So when you schedule the golf simulator, for instance, some are paid, some are complimentary, but you do need to manage the scheduling as well. So this is, again, another sub-business that we're solving as part of our software. So there's a lot to do on our end. And then there's the renewal part, where you want to renew your lease for another year or to move out, and there's a move-out checklist that you need to fill in, an inspection checklist, and so on.
Arthur Vaverko:So a lot to cover the entire journey. And we provide all of the features during that time, as well as smart access integration, a lot a lot of integration. And we try to make a very simple interface for the resident and for the property management team, for the on-site team to have to see everything in one place like a resident management system. So that's cool. A lot to do.
Arthur Vaverko:Yeah.
Shay Nehmad:It's a lot of operation to to operate.
Jonathan Hall:I'm curious. I expect this is an industry that most people never think about. People who live in apartment complexes are probably gonna download an app, and they'll see some of this. But it's not something that's, like, at the forefront of people's minds like, you know, car brands and stuff.
Jonathan Hall:What kind of competition exists? Is this a wide open field? Are you the only player? Or are there like thousands of players out there, you're all competing?
Arthur Vaverko:There are not thousands of players. There is some competition. Of course, the PMS companies, the property management companies that became the backbone of the industry, where you manage all your billing and accounting, are trying to meet us from that side, meaning they're trying to create additional applications for handling lease applications, applicants, and handling amenities for the concierge and so on. There are new companies, like, there's a company called Elise that started with prospect leasing and is trying as well, from that forefront, to meet the entire journey. So there are not thousands of competitors.
Arthur Vaverko:I would say there are, like, less than a 100, I think. But the main pain point today, I think, is what we call app fatigue. A common resident in a multifamily apartment, like, has seven or eight different applications. Mhmm. One for opening the lock, one for the intercom, one for the packages, one for paying the rent, another one for whatever.
Arthur Vaverko:And this is something that we strive to eliminate. We want the resident to have a single application, and we want the on-site team to have a single back office system where he or she will be able to see everything about the resident and communicate with them safely and and get the insights from there. Awesome.
Jonathan Hall:So let's get to the the point, the reason you're here. Oh, yeah. We're not here to talk about apartments. Yeah. Although that's interesting.
Jonathan Hall:We're here to talk about Go. Mhmm. How does Go fit into this whole picture?
Arthur Vaverko:Right. So when I started at Venn around three, almost three and a half years ago, I met the company as a very, very small startup, like eight developers. And they'd been working with very trendy technology, I might say. It was, like, 24 to 30 microservices written in Node.js, 600 or so Lambda functions on AWS, and SNS topics that stream events, choreography between those Lambda functions where one invokes the other and so on.
Arthur Vaverko:And then you get an email about a recursive function invocation. So there was a lot of overhead with managing the infrastructure, and we were fighting against the infrastructure and against the velocity of creating features, creating value. Because with the existing Node technology and microservices, we have the technical debt tax, where in Node every version is a breaking change. I guess you know that. And every library version that you update is a breaking change as well.
Arthur Vaverko:It doesn't matter if it's a minor or a patch, it will still break the code. So a lot of this, and we were struggling with creating actual value, actual features, and the velocity of development was very, very low. So the first thing that I understood, it clicked with me with Melvin Conway, Conway's law. Mhmm. The law that says that an organization, broadly speaking, will eventually create a system that reflects the communication structure of that organization.
Arthur Vaverko:And back then, we were actually fighting that law. Because the communication structure was, you know, like hallway talk, informal. We were closing stuff during coffee meetings in the kitchen, while the actual technical deployment pipeline was very formal, like 24 CI pipelines and strict contracts between microservices and drift control and everything. So it became very, very hard to maintain. One thing that we tried to do first was to actually converge all those microservices into a single monolith.
Arthur Vaverko:But I didn't want to create what we call in the industry a big ball of mud. Right. So another great architect, Simon Brown, had a very good talk about something he calls the modular monolith. This is what we actually started to build. We took all those microservices and we merged them into a single monolith that is split into different modules.
Arthur Vaverko:So the modules are not technical. It's not like you have the the API layer, the data access layer, the business objects layer. You have different modules where you have business vertical for each. So we have a social module. We have a resident module, and each module is very independent.
Arthur Vaverko:Each module will have its own data access layer, which should be internal only to that module. So it's like having microservices, but within the monolith. From that monolith, we intended to create multiple containers for different workloads. So the API would be built from this code and shipped to, like, whatever service. It could be ECS, Kubernetes. We use Kubernetes here.
Arthur Vaverko:And this was the plan. And we actually started it. I wanted to start as frictionlessly as possible. And because the team was really professional with Node.js and JavaScript, we were using that to create the modular monolith.
Arthur Vaverko:And very, very quickly, I mean, around a month or so into the project, I got a pull request. And that pull request changed my entire perception. I got a pull request where, I mean, it wasn't intentional. A developer just used an internal package from a different module, like, the resident module. It used some helper function from the data access layer of the resident module in the social module, because it needed to map something about the resident.
Arthur Vaverko:Mhmm. And it just clicked with me that there are no tools in Node.js to be able to encapsulate internals, internal implementation logic. Mhmm. And it might work for microservices, where you have no ability to import it because it's a whole different repository or package.
Arthur Vaverko:Well, I might argue that if you really want to, you're still able to do that. But other languages, not specifically Go, but we'll get to it, Mhmm, have the concept of an internal package, where you can create a module and say, in that module, I want to encapsulate part of the logic so that other modules, other packages, will not be able to use it.
Jonathan Hall:So I'm I'm not a I'm not a JavaScript nerd by any chance. Right? Yeah. By any means. But can't you, like, un like, either export or not export symbols in JavaScript and and use closures and stuff like that to give you at least some of what you're talking about?
Arthur Vaverko:So you can. You will export it. But as soon as you export it, it's available to any level of the hierarchy.
Jonathan Hall:Oh, sure. Yeah.
Arthur Vaverko:And you you need to export it to your parent module because it needs to be accessed there. But then if it's exported, it doesn't Yeah.
Jonathan Hall:You import it. Yeah. Right.
Arthur Vaverko:Everybody can use it. In C#, you have an internal access modifier, for instance. In Java, I think there is something similar.
Shay Nehmad:Yeah. Like protected or something?
Arthur Vaverko:Yeah. Something like that. Restricted. I I don't really remember the name
Shay Nehmad:It's interesting, because you mentioned you're the, like, chief architect at Venn. So these access level things are a tool that's useful for you to do what? To make sure that developers are not, like... it's sort of creating guardrails around, okay, this is what I want to be internal, this is what I want to be exposed when you're developing, because you can't review every single pull request. So these are, like, the sort of tools or guardrails or permissions, I guess, that allow you to shape the architecture more strongly. Or, more correctly, I guess, the thing that was missing from JavaScript.
Arthur Vaverko:Exactly. You can put in as many linters as you want. An ESLint rule will eventually be disabled if the patch is urgent enough. But...
Shay Nehmad:Or if Claude Code is lazy enough. Like, I solved it: ts-ignore it.
Arthur Vaverko:Yeah. But, I mean, it takes a comment to ignore a rule. That's it. And what I was looking for is something that will enforce architectural constraints to the point where a developer will not be able to override them. So this ability is priceless for me as an architect, but I think it is priceless for any team of developers trying to build good, maintainable software.
Arthur Vaverko:And I think that Go really, really shines in that respect because, first of all, you have the ability to internalize and encapsulate logic so easily and so simply, just by naming a package internal. And that's it. Everything inside is not visible to anyone besides the parent module that holds the internal package. So easy. This is what in the industry we call the pit of success.
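For listeners who haven't leaned on this, here's a rough sketch of the guardrail being described. The module and package paths are hypothetical, not Venn's real layout, but the enforcement is the standard behavior of Go's internal packages.

```go
// Hypothetical layout for one business module in a modular monolith:
//
//   example.com/app/
//     resident/               // the resident business module
//       resident.go           // exported surface other modules may import
//       internal/store/       // data access layer, for the resident module only
//     social/                 // another module; cannot import resident/internal/...
//
// File: resident/internal/store/store.go
package store

import "errors"

// ErrNotFound is returned when a resident does not exist.
var ErrNotFound = errors.New("resident not found")

// LeaseEndDate is a stand-in helper that only the resident module should call.
// If social/social.go tried `import "example.com/app/resident/internal/store"`,
// the build would fail with an error along the lines of:
//   use of internal package example.com/app/resident/internal/store not allowed
func LeaseEndDate(residentID string) (string, error) {
	if residentID == "" {
		return "", ErrNotFound
	}
	return "2026-08-31", nil // placeholder data for the sketch
}
```

The point is that the boundary is enforced by the compiler, not by a linter rule a hurried patch can switch off.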
Arthur Vaverko:You're throwing the developer into the pit of success. Making the good things easy and the bad things hard. This is what I'm there for, trying to figure out what would be the best approach for that. So that's what broke the... the camel's back. Sorry.
Arthur Vaverko:And this is the PR that broke the camel's back. And I took it upon myself to take what we wrote in a month and rewrite it in Go. And I chose Go just because I was really familiarizing myself with Go at that point. I was moving from C# to Go. And I just really fell in love with the simplicity of the language, and with Rob Pike's talk about how simplicity is complicated.
Arthur Vaverko:And that got me hooked on Go. So, the magic where you can just onboard yourself into a language in two weeks, and then we'll talk about how the metrics changed in our company as well, with new devs onboarding so quickly to Go. The simplicity of the language is amazing. And there are three main points that I think got us as a team to choose Go, because it was not only my decision. What we actually did is we took a developer to write a new feature using Go instead of using Node.js.
Arthur Vaverko:And we gave the developer, like, two weeks of onboarding and training sessions and so on. After two weeks, the experience of the developer and the code that we got was that it's just a joyful experience to code and work with that language. So this is what made the entire team agree that we would switch tech stacks from Node.js to Go on our back end.
Arthur Vaverko:I really, really pray for the day when you can use Go on the front end as well, for web development, but I don't think it will ever happen.
Jonathan Hall:I mean, I use Go for web development, but... no, it's not very easy. Yep. But it can be done.
Shay Nehmad:It's not as joyful an experience as, like, writing back end services, for sure. When you're switching tech stacks like that, other than the joy of developing, sometimes there are other, you know, reasons, one of them, of course, being: the frontend is in JavaScript, so if the backend is in JavaScript, they can easily share types. So, you know, the API management thing is easier because it's the same struct in the front and in the back of any API request. Like, it's gonna be the same JSON if you're using normal REST or whatever. Maybe there's a specific library that's very easy to use.
Shay Nehmad:And clearly, all the engineers already know JavaScript as well. So I wanna play devil's advocate here a little bit and try to... because this is a big... you know, y'all are a business, and switching tech stacks is a big cost. Now, obviously, here on the call, we're all biased, and we're all gonna say developing in Go is... I don't know. Maybe Jonathan doesn't find any joy in anything anymore, but I think he likes programming in Go compared to other languages generally.
Shay Nehmad:Like, if I gave you two contracts, consulting contracts, and I have you pick, Jonathan, same sort of domain, same payment, one in Go, one in, I don't know, some random high level language.
Jonathan Hall:I wouldn't even consider the other one.
Shay Nehmad:Yeah. You know what I mean? So so I I don't think we're we're trying to dismantle the core of your argument that it's more joyful or that it's more productive, but there are, like, other costs to switching a company's tech stack. Right? Yes.
Shay Nehmad:How did you handle those, like, objections?
Arthur Vaverko:Oh, yeah. That was a very quick meeting with numbers. I mean, it was very easy with Node.js because the maintenance tax was really huge. We paid the maintenance tax over and over again when we needed to. Like, you know, you have a microservice that you wrote in Go, and you upload it... you wrote in Node.js...
Arthur Vaverko:I'm sorry, and you upload it to AWS. So there's a Lambda there. And you wrote it at, I don't know, Node 16 or so. And, I mean, a year after, you get a note from AWS: listen, we're deprecating Node 16. You need to upgrade.
Arthur Vaverko:And the upgrade process is something that takes time. Not only upgrading the tech stack, but upgrading libraries that we use in Node.js broke the code as well. And this is something that I think is very, very familiar to any JavaScript programmer, where you import a single library and get the node_modules black hole and everything. So that maintenance tax was the golden ticket to get the switch, because a lot of the developers were heavily invested in maintaining services and maintaining the technical debt that was created.
Arthur Vaverko:Go, with the Go 1 compatibility promise, eliminated that need. I mean, we started from Go one nineteen. We're at one twenty five. I just upgraded to one twenty five when it was released, and it was just easy.
Arthur Vaverko:I mean, you just switch, and that's it. You don't need to do anything. And I think that this mindset also, like, drills down into the minds of all of the community developers, because all of the libraries adhere to the same principle as well. Most libraries written in Go are very, very strict with versioning and very, very strict with breaking changes, doing breaking changes only on their major version. So when your standard library and the ecosystem behave that way, you just get maintainable software where you can just develop new features, and you don't need to go back and do technical debt maintenance all over again.
Shay Nehmad:So your your, like, case was, okay, we're gonna put down, like you're talking, like, real estate. Right? We're gonna put a down payment on this migration. It's gonna cost us, obviously, not it's not gonna be easy to to change languages.
Jonathan Hall:Mhmm.
Shay Nehmad:But after a while... look at all these, like, tech debt management things we're gonna have to do if we stay on Node.js. Just, like, integrate the curve, you know, to see how much effort we'll have, yeah, beneath that graph versus how little we'll have beneath the Go graph, and just wait a little while and it's gonna be worth it. How long, like, was that cutoff point? Was it like, oh, in a year, we're gonna recoup our costs, or was it like, listen, it's a no-brainer.
Shay Nehmad:In two weeks, we're gonna get all the costs back because of the node 16, you know, deprecation thing.
Arthur Vaverko:So I was I was really shaking at that meeting, trying to convince my boss to to give the time for that project because I know from experience when you start such a migration project of technical stack, it never ends. I mean, you start it, and and at the end, you look back after two years, three years, and you see this legacy code behind. No one wants to touch it, but you still have to maintain it. And now you have the new thing that everybody wants to use. So you you're gonna end up with both.
Arthur Vaverko:But I think that, I don't really know if Go was the reason, but it really made it easy. We managed, I mean, in half a year, to transfer, like, the Pareto, the 80% of our workloads to the new infrastructure using Go. And the 20% that's left requires 80% of the work. But I think that after a year or so, a year and a half, we've managed to transfer almost everything. We do have some legacy Lambda services still running, but they're really, really negligible, and we don't really maintain them.
Arthur Vaverko:If something went wrong, we would just convert them to our infrastructure. So, yeah, it proved to be a success story, the migration of the tech stack. And I was surprised, because I never saw an actual migration project like that end up with those results. It was really amazing to see. And what was most amazing to me is how quickly we got all of the development team on board.
Arthur Vaverko:Now we're around 33 developers and counting. How easy it is for a new developer to onboard to the project. Both because Go is so simple to learn and because the architecture that Go, in some way, really forces you into, to simplify, makes the code base very simple to learn. Yeah. It was an easy sell.
Jonathan Hall:Cool. And any regrets? Any anything maybe regrets is too strong, but, like, were there any surprises, any negative aspects to this switch to go?
Arthur Vaverko:I didn't think about that. I never thought about that. You know?
Jonathan Hall:It's just all rainbows and unicorns.
Arthur Vaverko:Yeah. It it it's not. It's not. What's harder?
Shay Nehmad:They had to evict all front end developers from the properties themselves. They're like, you have to write in Go to live in one of the prop... no.
Jonathan Hall:I'm just kidding.
Arthur Vaverko:You know, I can't find anything that made it harder. I mean, I guess there were some developers that were struggling specifically with JSON, I think. There was a bit of a struggle. It's a weak point in Go because of the static typing, so it's a trade-off that you pay. Working with JSON is not really fun, but it's safe.
Arthur Vaverko:I'll give you that. So I think, if I had to place a finger on the thing that was the hardest, or the biggest surprise, where I heard the most cries for give us Node.js back, or JavaScript, we want JavaScript back, it was working with JSON. We never switched to JSON v2 yet. I never got the chance to get to that...
Jonathan Hall:Next month, once 1.26 is released.
Arthur Vaverko:So, yeah, that's gotta be it.
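For anyone who hasn't felt that particular pain, here's a small sketch of the trade-off being described, using the classic encoding/json package; the struct and field names are made up for illustration, and the json/v2 package mentioned above isn't used here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Application is the kind of payload where Go's static typing pays off:
// every field has a declared type, and unknown keys simply land nowhere.
type Application struct {
	ResidentName string   `json:"residentName"`
	UnitID       string   `json:"unitId"`
	MonthlyRent  float64  `json:"monthlyRent"`
	CoApplicants []string `json:"coApplicants,omitempty"`
}

func main() {
	raw := []byte(`{"residentName":"Dana","unitId":"4B","monthlyRent":2350,"extra":"ignored"}`)

	var app Application
	// Unmarshal is strict about types but silently drops the unknown "extra" key.
	if err := json.Unmarshal(raw, &app); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", app)

	// Marshal omits CoApplicants entirely because of the omitempty tag.
	out, _ := json.Marshal(app)
	fmt.Println(string(out))
}
```

It's more ceremony than JavaScript's free-form objects, which is the complaint, but the struct tags are also what keeps the payloads safe.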
Shay Nehmad:And I do wanna... you know, it's fun to try and poke holes in the success stories, so, like, don't feel like we're, you know, attacking you, it's just trying to understand how it really went down. The team at Venn, I assume, has some people that do back end and some people that do front end. Did you have, like, full stack developers, maybe more early on, that worked across the entire stack? Like, you would give them a feature and they would implement both the front end and the back end?
Arthur Vaverko:So that might surprise you, but all of the developers at Venn are full stack developers. All of us. We do front end and back end. But personally, I think that no developer is really a full stack developer. You do have a preferred side.
Arthur Vaverko:It's like you're, Mhmm, either left-handed or right-handed. You can't be both. Ambidextrous people are just unicorns. This is not something that you really find.
Arthur Vaverko:So we do have developers that became more back end oriented. And I think that the switch to Go made it more obvious and more compelling for them to be more proficient in back end development. Go is really, really suitable for back end development. And I mean, static typing, the near-instant build time. I mean, you just press build and everything's ready for you.
Arthur Vaverko:The package system, the backwards compatibility promise, everything's just placed right into the hands of back end developers. And we still use JavaScript. We still use JavaScript for front end. We do not share... we can't share types anymore.
Arthur Vaverko:And I think this is actually a good thing. This is not a bad thing. You should not share types between your front end and back end. Eventually, what this will do, you will leak business logic to your front end applications. And then when you find another front end app that you need to create, you will find yourself reimplementing the same business logic that you already did in in a different app.
Arthur Vaverko:So when you have multiple clients like we do, we have a front end for the applicant. We have a front end for the resident. We have a front end for the on-site team. We actually have a front end which is not really a front end because it's an LLM, but we treat it as a front end. In the new technical landscape, LLMs became the new front end.
Arthur Vaverko:This is how you interact with back end systems today, like agents. So an agent is just another API, another front end that uses the API. So we create an API that can serve all of them. We create back end for front end BFFs. And all of those BFFs, everything lives inside that modular monolith, which is very, very nice.
Arthur Vaverko:So we converted all of those services into that monolith.
Shay Nehmad:Yeah. From from an architectural perspective, I think the the theme of 2025 and 2026 is definitely, okay. Simplify. I don't know what we all huffed in 2020 that we thought every single developer needs 15 Kubernetes clusters for their to do app, but now it's like, okay. You need one binary, and it better be small.
Arthur Vaverko:Yes. And there's actually a great article by someone. I don't know if I'll find it, but there was an article that I read where the main quote was: you're not Google. You're not Google. You're not Amazon.
Arthur Vaverko:You don't have Google problems. You don't have Amazon problems. You do not need Kafka. You're not LinkedIn. You don't have those amounts of data to to transfer.
Arthur Vaverko:Just don't follow the trend, and simplify. And this is exactly what we did. We removed the trends. We simplified. We actually no longer use choreography.
Arthur Vaverko:One of the most trendy patterns that we see around, or we saw, I don't know if that's still the case, is using choreography. It's where one service sends out an event to, like, a service bus, and there are a lot of other services that pick up that event and do their logic and send other events. There are saga patterns and so on. Some use Kafka, some use SNS, or different service buses like RabbitMQ and so on. But that pattern is something that really shines when you have a big development team.
Arthur Vaverko:Very formal communication channels. And when you have a need to formalize the way that you communicate with another service, and you want to be as loosely coupled as possible from that service. But when you have a team of eight developers, or even 30 developers, where everybody fits in the same room, you don't really need this. What you want is ease of maintenance, ease of debugging. So we switched to a pattern called orchestration.
Arthur Vaverko:You have a main, like, a conductor, a main workflow that holds the logic of what needs to happen. And actually, we use great infrastructure for that called Temporal to orchestrate all of our workflows, and it's actually written in Go. It started as a project at Uber called Cadence.
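To make the orchestration idea concrete, here's a minimal sketch in the style of the public Temporal Go SDK. The workflow and activity names are hypothetical, not Venn's actual code, and the options are the bare minimum to compile.

```go
package workflows

import (
	"context"
	"time"

	"go.temporal.io/sdk/workflow"
)

// Hypothetical activities; in a real project these would be registered functions
// that call screening providers, the PMS, and so on.
func RunScreening(ctx context.Context, residentID string) error  { return nil }
func SyncLeaseToPMS(ctx context.Context, residentID string) error { return nil }

// LeaseOnboarding is the "conductor": one workflow function holds the logic of
// what needs to happen, and Temporal takes care of retries and durability,
// instead of services reacting to each other's events on a bus.
func LeaseOnboarding(ctx workflow.Context, residentID string) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 5 * time.Minute,
	})

	// Each step is durable: if the worker crashes, Temporal resumes from history.
	if err := workflow.ExecuteActivity(ctx, RunScreening, residentID).Get(ctx, nil); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, SyncLeaseToPMS, residentID).Get(ctx, nil)
}
```

Reading the workflow top to bottom is the debugging win being described: the order of operations lives in one function instead of being spread across event handlers.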
Shay Nehmad:As many of these pretty good, well-tested software architecture projects did. As Jonathan learned on one of the latest episodes, where he had the guys talk about dependency injection and they changed his mind.
Arthur Vaverko:Oh, it's great. We actually use no dependency injection. I heard that episode, and I was thrilled to hear that I'm not crazy. Yeah.
Shay Nehmad:I think that episode, whether you really like using dependency injection, whether you don't like using dependency injection, whether you like using a library, or whether you don't like using a library, everybody came out of that interview pretty happy, because their opinion was confirmed.
Jonathan Hall:Hang hang on. Are are you saying no dependency injection? You mean no dependency injection framework?
Arthur Vaverko:No framework. We do use dependency injection.
Jonathan Hall:Yeah. I I I would kill myself before I didn't use dependency injection.
Shay Nehmad:What do you mean? Just initialize the database in every class that needs it.
Arthur Vaverko:Yeah. Yes. Please do.
Shay Nehmad:New logger.
Arthur Vaverko:New logger. Or pass it in the context.
Shay Nehmad:Yeah. Yeah. Just put everything in context. That's why it's a dict of any to any. If Rob didn't want us to use context, he shouldn't have created the library, you know.
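If you're wondering what "dependency injection without a framework" looks like in practice, here's a tiny sketch: plain constructors, with the logger and database handed in explicitly rather than pulled from a global, a container, or the context. The names are made up for illustration.

```go
package main

import (
	"database/sql"
	"log/slog"
	"os"
)

// ResidentService receives its dependencies explicitly.
type ResidentService struct {
	db  *sql.DB
	log *slog.Logger
}

// NewResidentService is the whole "framework": a constructor.
func NewResidentService(db *sql.DB, log *slog.Logger) *ResidentService {
	return &ResidentService{db: db, log: log}
}

func main() {
	log := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// In a real app this would be a driver-backed *sql.DB; nil keeps the sketch compilable.
	var db *sql.DB

	svc := NewResidentService(db, log)
	_ = svc // wire svc into HTTP handlers, workers, and so on
	log.Info("service constructed")
}
```

The wiring is just function calls in main, which is easy to read and trivially testable with fakes, at the cost of writing the constructors yourself.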
Jonathan Hall:Wait a minute. Alright, Arthur. Well, it's it's been a lot of fun learning about Venn and hearing your success and all about rainbows and unicorns. Tell us tell us how we can well, I was gonna say how we can, like, follow Venn. Although, I suppose it doesn't matter to most of us unless we own an apartment complex.
Jonathan Hall:But what do you wanna plug here? Well, let's just do it that way. I'll leave it open. Do you wanna plug Venn? Do you wanna plug a personal blog?
Jonathan Hall:Anything at all.
Arthur Vaverko:I wanna say that Venn is really... Venn just completed the Series C funding round, and we raised 52 million.
Jonathan Hall:Okay.
Shay Nehmad:Congratulations.
Arthur Vaverko:Thank you. And we're growing. We're looking for developers that want to join our team and do great stuff here. We're touching everything from AI to data pipelining, APIs, front end. So if you really like touching everything and you like Go, you can just DM me.
Arthur Vaverko:I'm on LinkedIn. I'm on Medium.
Shay Nehmad:And links to your careers page, venn.city, and your LinkedIn are in the show notes.
Shay Nehmad:If you're listening to this and you're like, oh, how the hell do I spell Vaverko anyway? Oh. Don't worry about it. The link is in the show notes.
Arthur Vaverko:In English, it's actually very simple, but try spelling it in Hebrew. You just... right. For the convenience. I mean, right. Thank you so much, guys, for having me.
Arthur Vaverko:I love your work. Really do.
Shay Nehmad:Thank you. Thank you. And I think we're christening a new... it's a new year, new us.
Jonathan Hall:New. Yeah.
Shay Nehmad:If this is your first episode or you haven't followed, every year we do what we call the stumper question, which is, like, the same question to all our interviewees that year. The first year, it was: if you had to remove a feature from Go, what would you remove? And if you had to add a feature from another language, what would you steal from another language into Go? And we sorta ended up with all the features of all the languages on both lists. So... Yeah.
Shay Nehmad:It all canceled out at the end. Second year, what was it? I don't even remember.
Jonathan Hall:It was What was the biggest surprise learning Go
Shay Nehmad:or something like: how long have you been doing Go, and what was the biggest surprise or the most difficult thing? This year, I really like... like, 2025, we had: who was the person who influenced your Go journey the most? So it was very, like, personal and whatever. This year, we're going back to a technical question. Right? Yeah.
Shay Nehmad:Kind of. Something that we can actually learn from. Jonathan, Arthur, please christen this new stumper question.
Jonathan Hall:Yay, Arthur. I hope you're ready. Drumroll, please. Arthur, what is your favorite Go library that's not in the standard library? And why?
Arthur Vaverko:Okay. So third party. I must say that gqlgen is gonna be the best one, from 99designs. This is a library for writing a GraphQL API with a schema-first approach. I mean, they have the entire ecosystem, they just nailed it.
Arthur Vaverko:They created a great developer experience where you can write your GraphQL schema. So you have an API schema that your front end and your back end share. And the front end can develop a client, a type-safe client, from that schema. And on the back end, using gqlgen, you develop against a type-safe interface layer where you know exactly what you need to implement. And all the types are generated for you as well. So kudos to them.
Arthur Vaverko:They're doing great work, and it's amazing. Easy choice.
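For listeners who haven't seen the schema-first flow, here's a very rough sketch of its shape. The schema, type names, and layout are made up for illustration, and the pieces gqlgen would normally generate are stood in by hand-written types so the sketch compiles on its own; treat it as the flavor of the workflow, not gqlgen's exact output.

```go
// Hypothetical schema file (schema.graphqls):
//
//   type Resident { id: ID!  name: String! }
//   type Query    { resident(id: ID!): Resident }
//
// In a real gqlgen project, running its generator turns that schema into model
// structs and a QueryResolver interface; the types below stand in for that code.
package graph

import "context"

// Resident mirrors what the generator would emit into the model package.
type Resident struct {
	ID   string
	Name string
}

// QueryResolver mirrors the shape of the generated interface: one method per
// query field, with a signature dictated by the schema.
type QueryResolver interface {
	Resident(ctx context.Context, id string) (*Resident, error)
}

// queryResolver is the hand-written part: the actual business logic.
type queryResolver struct{}

func (queryResolver) Resident(ctx context.Context, id string) (*Resident, error) {
	// Look the resident up in the resident module; hard-coded here for the sketch.
	return &Resident{ID: id, Name: "Dana"}, nil
}

// Compile-time check that the resolver satisfies the schema-derived interface.
var _ QueryResolver = queryResolver{}
```

The appeal is that the contract lives in the schema, and the compiler complains the moment the back end drifts away from it.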
Shay Nehmad:Very cool. Coverage: zero? I'm opening that.
Arthur Vaverko:My god.
Shay Nehmad:Gmail dot com. This is what I'm seeing. But the Go Report Card is an A+. So maybe, I don't know, maybe they don't believe testing is cool. But I really like schema-first approaches to development anyway.
Shay Nehmad:Like, I'm the number one advocate of write your API contract first and then have something generate the rest. The more deterministic the generation, the better. Now, honestly, if your API is really, really, really well documented and you have a DB, I feel like a lot of people are experimenting right now with writing the API in whatever, you know, REST or GraphQL or protobuf or whatever, then using these exact types of generators to generate all the boilerplate, then using, like, Claude Code or whatever to generate the implementation, just to have a really well documented, schema-first API. So I haven't used GraphQL for Go yet, but I can really understand why you're upvoting this specific one. Yep.
Shay Nehmad:Link in the show notes, of course.
Jonathan Hall:Same. I haven't used GraphQL at that level, but I definitely appreciate API generators for gRPC, for OpenAPI, for whatever.
Arthur Vaverko:No. You really should try GraphQL. I mean, I was a REST advocate for very long time. It's simple. It's nice.
Arthur Vaverko:But the evolution of the OpenAPI schema and the way that you need to... it's very verbose. It's very hard to read. And the schema, I mean, I think, besides being a source of truth, it's like documentation for the developer to know what's there and what's not. It's gotten really, really hard and messy to work with OpenAPI and share models between... GraphQL is very, very simple. I really encourage you to give it a try.
Jonathan Hall:I could rant for hours about REST. I'm not sure that GraphQL is is the right solution for many problems. I I I certainly see it's the right solution for some. But, yeah, that would be a fun conversation for another day.
Shay Nehmad:Yeah. Now, when talking about all these things, we also, unfortunately, have to remember JSON-RPC as well, because MCP is built on JSON-RPC. So for some reason, that has resurfaced again. But generally, no matter... like, pick your poison, whatever language you choose to define your contract, having a good generator for that contract is always... Absolutely. So shout out to gqlgen.
Shay Nehmad:Link in the show notes as well. Arthur, thanks so much for coming on our show.
Arthur Vaverko:Thanks for having me.
Shay Nehmad:It was a blast.
Arthur Vaverko:I love your work. Thank you.
Jonathan Hall:Thank you. Until next time.
Arthur Vaverko:Okay.
Shay Nehmad:Program exited. Program exited. Goodbye.