🐋🐋 Two Orcas swimming in a pod(cast); FDs, Aliases, and more with Maxim Vovshin
This show is supported by you. Stick around till the ad break to hear more.
Maxim Vovshin:This is Cup o' Go for September 26, 2024. Keep up to date with the important happenings in the Go community in about 15 minutes per week. I'm Maxim.
Shay Nehmad:And I'm Shay. Hello, Maxim. Welcome to our show.
Maxim Vovshin:Hey. Nice to be here. Thank you for having me.
Shay Nehmad:Yeah. This time, it's the Orca star hour: two people from Orca Security giving you Go news. And my secret plan of joining Orca a year ago and converting all of them to Go is slowly going according to plan.
Maxim Vovshin:It's progressing. Although I'm not doing any Go at the moment, but maybe in the future. For now, I'm taking the position of the layman. Yeah.
Shay Nehmad:That's totally fair. So we have a couple of proposals to discuss, one that's likely accepted, one that's declined. Then we have two pretty hardcore blog posts to dig through. So let's jump into it. What do you think about the web?
Shay Nehmad:Do you like installing applications or using applications from Chrome? Let's say Jira. Do you prefer to install the Jira application or use it from the web?
Maxim Vovshin:No. Web. I'll go with web.
Shay Nehmad:So everything we do has started to happen from, you know, Chrome web applications. And there's a newish code format called WebAssembly, usually colloquially known as WASM, which is all about compiling your Go or Rust or C code or whatever to something that can run on the web. That's the goal. It doesn't make any web-specific assumptions, but basically the goal is to compile things so they can run in your browser with near-native performance and use your hardware and your GPU and stuff like that.
Maxim Vovshin:The point is to not use JavaScript. Right?
Shay Nehmad:Yeah. Exactly. People hate JavaScript and all the problems that come with it. And it's basically a low-level thing. Right?
Shay Nehmad:You won't write WebAssembly by hand, in the same way you won't write assembly by hand today. Like, unless you have some very specific high-performance, position-independent code on, I don't know, an embedded thing somewhere. Yeah. You're gonna write in a high-level language, and it's gonna get compiled to assembly.
Shay Nehmad:So WebAssembly is the same. It just runs in Chrome, basically. It's not super common to see people writing stuff and compiling it to WebAssembly. But as time goes on, I've seen more and more real projects, not just toy projects. Things like spreadsheet software.
Shay Nehmad:You know, Google Sheets is not written in WebAssembly, but there are competitors in WebAssembly that are much faster, and they can run Python in the cells because Python can compile to WebAssembly, like the interpreter, etcetera, etcetera. So some pretty cool stuff. There's a proposal in the final comment period about WebAssembly, and I thought WebAssembly was pretty esoteric, but from the number of comments and details on this proposal, I guess I just don't know what's going on, basically. The proposal is to have Go compile to wasm32 a lot more smoothly. So Go already compiles to WASM, but WASM is not just one thing.
Shay Nehmad:There are, like, different flavors of WASM, in the same way you have different architecture flavors, 32-bit, 64-bit, etcetera, etcetera. Go is working on wasm32 compatibility. Go has supported WASM since 2019.
Maxim Vovshin:It had only, like, 64-bit, or...
Shay Nehmad:Yeah. Most of the WebAssembly support was for 64-bit, and, you know, most of the hardware today is 64-bit anyway, so that makes sense. And Firefox and Chrome have support for 64-bit memory, etcetera, etcetera. On the server side, though, a lot of hosts use a 32-bit architecture for WASM. If you are into WebAssembly, maybe you're working on TinyGo, or you just want to nerd out about byte alignment and how the structs line up and all these, like, annoying problems, this is a really, really interesting proposal to try to dig through.
Maxim Vovshin:So the proposal is accepted, or...
Shay Nehmad:Not yet. Not yet. It's in the final comment period, and it's a likely accept. I think this is the sort of proposal where there's no debate about whether you should do it or not. It's more like a code review where everybody already agreed it's gonna happen, and it's just about whether the way I'm proposing to do it is correct.
Maxim Vovshin:So help me understand, like, what do we gain from it? We'll be able to compile Go to 32-bit WASM, which we weren't able to before? Or...
Shay Nehmad:So the best explanation I can give: Go has support for WASM, but it doesn't really support 32-bit WASM. And that's relevant because it uses less memory and, you know, the structures will be smaller. And this is the architecture that's relevant for a few WASM hosts, especially on the back end. Firefox and Chrome, no problem, but if you're running WebAssembly on the back end, which makes sense, let's say you have a small function inside a Node service that you want to write in WASM, and your host is 32-bit, then you wanna use 32-bit WebAssembly to do so.
Shay Nehmad:And, obviously, you want to write it in Go because Go is awesome. The proposal will sort of match up the 32-bit types that you can pass from and to functions. So it'll work, and it also clearly lays out what won't work. So you can't do, for example, a generic channel in wasm32, or a generic map, or you can't pass a function; these sorts of more complicated types won't really work, which makes sense. Right?
Shay Nehmad:You're compiling to a very specific subset, which is smaller, etcetera, etcetera. So it's pretty cool. There's a lot of future work as well on the WebAssembly side. There are improvements to the API and the type system, and a lot of things that will improve there. So once that target is available, then Go could improve its compilation to work with it.
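A minimal sketch of the WASM target Go has shipped since 2019 (the browser flavor, via syscall/js), to make the boundary discussion concrete. The function name addFromGo and the build layout are assumptions for illustration, not anything from the proposal; note that only simple values cross the boundary here, which is the same kind of restriction the wasm32 proposal spells out for its signatures.

```go
//go:build js && wasm

// Build with: GOOS=js GOARCH=wasm go build -o main.wasm .
// and serve it alongside wasm_exec.js so the browser can load it.
package main

import (
	"fmt"
	"syscall/js"
)

func main() {
	// Expose a Go function to JavaScript. The name "addFromGo" is made up.
	js.Global().Set("addFromGo", js.FuncOf(func(this js.Value, args []js.Value) any {
		return args[0].Int() + args[1].Int()
	}))
	fmt.Println("Go WASM module loaded")
	select {} // keep the Go runtime alive so JS can keep calling in
}
```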
Shay Nehmad:It's sort of an evolving process because, again, it's pretty new. It was only introduced in 2019. So pretty cool. I don't know, WASM and all this technology is pretty cool, but I just haven't had a lot of chances to play around with it, especially in Go, and I do Go.
Shay Nehmad:So this is a likely accept. If it's relevant for you, Go comment. How about you bring the energy down with a declined proposal? Tell us what's not gonna go into Go, Maxim.
Maxim Vovshin:Yes. Not all of them go in. Some of them get declined, like this one, the file descriptors proposal. Basically, the proposal was to introduce a new API to understand whether a file descriptor, an internal file descriptor, is used or not by Go. So the issue was that there is a CVE that was found, and someone wanted to patch that issue on their system.
Maxim Vovshin:So the way they have to patch it is they go through all of the file descriptors they have in their container and they just close all the file descriptors. The issue is that there is a file descriptor, it's called an epoll file descriptor, and when you try to close it, the Go runtime just panics, because for some reason it's really important. I'm not sure why.
Maxim Vovshin:I did dig a bit more into it, but the important thing is that it just panics. So they wanted to have, like, a new API that would allow us to understand if we can close the descriptor or not. They commented some more about it, and then someone just said, why don't you check whether it's an epoll descriptor or not? And if it is, then just don't try to close it.
Shay Nehmad:Wait. So what is an epoll descriptor? Is this a specific type of file descriptor?
Maxim Vovshin:From what I understood, it provides a way to monitor multiple file descriptors to see if they are ready for some I/O operation, like reading or writing.
Shay Nehmad:Oh, so if you skip the epoll descriptors, if you close all the file descriptors but not the Go runtime's epoll descriptor, then it won't panic.
Maxim Vovshin:It won't panic. Yeah. It can close, like, the regular descriptors, but once you touch an epoll descriptor, then your program is doomed.
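A minimal sketch of the workaround being described, assuming a Linux host where the runtime's epoll descriptor shows up in /proc/self/fd as an "anon_inode:[eventpoll]" link; the helper name is made up for illustration and this is not the API from the declined proposal.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// closeInheritedFDs walks /proc/self/fd and closes everything except
// stdin/stdout/stderr and anything that looks like the Go runtime's epoll
// descriptor, which would make the netpoller panic if closed.
func closeInheritedFDs() error {
	entries, err := os.ReadDir("/proc/self/fd")
	if err != nil {
		return err
	}
	for _, e := range entries {
		fd, err := strconv.Atoi(e.Name())
		if err != nil || fd <= 2 {
			continue // skip non-numeric entries and stdio
		}
		target, err := os.Readlink("/proc/self/fd/" + e.Name())
		if err != nil {
			continue
		}
		if strings.Contains(target, "eventpoll") {
			continue // leave the runtime's epoll fd alone
		}
		syscall.Close(fd)
	}
	return nil
}

func main() {
	if err := closeInheritedFDs(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```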
Shay Nehmad:That feels like a bit of a workaround, though. Like, what will happen if, you know, there are future versions of Go that introduce other types of file descriptors that will again panic? What's sort of the trade-off here? Why was it declined?
Maxim Vovshin:Your question is good, and they addressed it, in a way. They said that they wanted to have this interface because they wanted to, like, protect themselves in the future from new types of descriptors. But it looks like the... I don't know. Who checks those proposals? Who decides if it's declined or not?
Shay Nehmad:So the Go team at Google has a weekly proposal review where they go over the proposals and just decide, basically.
Maxim Vovshin:Yeah. Okay. Good to know. So their team decided that it's not worth it to add a new interface for this specific thing, because they think it's too niche.
Maxim Vovshin:So they prefer to have this workaround in some areas of different projects and not, quote, pollute the interface. And it's reasonable. I think it shows how much they care for the interfaces in Go, which is pretty sweet, like, to not have too much garbage in them. So, yeah, that's it.
Shay Nehmad:The vulnerability itself, like, the CVE, is super cool. It's not really about Go, so, you know, we won't go into it. It's in the Docker engine, runc, which is, by the way, in Go. So maybe it is worth diving into. But it's more of a Linux thing, file descriptors and stuff like that.
Shay Nehmad:I don't know. It's very cool. I'll just say that.
Maxim Vovshin:Yeah. Something like, if you don't close the descriptor and you somehow, you know, exit, or... was it on exit or not? I'm not sure. But somehow they're able to use that descriptor in the, like, aftermath of the container.
Shay Nehmad:Yeah. It's kinda weird, but you run your Docker container, and you set your working directory to /proc/self/fd, and then the Docker engine calls containerd, which calls runc. It's like a chain of forks, and then it sort of injects in the middle. I don't know. Something very, very cool.
Shay Nehmad:When I say cool, I mean clearly very well written. There are charts, there are ASCII animations. I can just imagine, you know, the vulnerability researcher sitting down and connecting red string on their whiteboard, trying to figure out what's going on, how does this connect to that, with, like, 15 references to the kernel, the runtime on GitHub, and opencontainers.
Maxim Vovshin:Hey. It runs wild.
Shay Nehmad:Yeah. It's pretty cool. So we're putting the link to the CVE as well. It's CVE-2024-21626. It's pretty cool.
Shay Nehmad:If you're into security stuff, maybe you should read it. So, proposals: one that's a likely accept, one that's declined. You asked, by the way, since this is your first time on the show, how do we know what to talk about this week? Yeah.
Shay Nehmad:So the Go team is pretty good about it, actually. They have a board of proposals, and they just list the items there, you know, likely accept, active, and then incoming, and it's very daunting. Likely accept, you have 1; active, you have 23, which is a lot of proposals just to discuss and think and talk about. And then incoming, you have, like, 700. Damn.
Shay Nehmad:Yeah. A pretty interesting backlog to chew through. But, you know, some of these are, I guess, less important than others. As you go down the list, maybe there are some things that aren't super important.
Maxim Vovshin:Yeah. I might be wrong here, but it looks like there is, like, strong gatekeeping on which features are going into the language, which is really good, because it's not always the case, at least not with Python, for example, where you have, like, 1,000 ways to do the same thing. Although I'm not super familiar with Go, like, I don't code in it in my daily job, but in my opinion there's a simplicity to how you do things.
Shay Nehmad:I think a lot of effort has gone into, let's keep Go, you know, one way to do one thing. Actually, recently there's been a small departure with, for example, generics. Go lived without generics for a really long time, and then they added them. And also, iterators.
Shay Nehmad:Go didn't have iterators for, like, 14 years or whatever, and now just recently they added them. But overall, it's a very, very simple language. I saw a comparison once, I don't remember where, of languages by number of keywords, like, how complicated the language is just by how many keywords and, like, syntax structures it has, not even talking about libraries and things like that. Go is pretty low on that list.
Shay Nehmad:It doesn't have a lot of keywords, doesn't have a lot of stuff. I think this gatekeeping process is a big reason why. I proposed adding YAML to the standard library once. I opened the proposal
Maxim Vovshin:Yeah. I remember.
Shay Nehmad:And they rejected it.
Maxim Vovshin:Yeah. I remember sending it in Slack.
Shay Nehmad:That was very sad. Cool. So these are proposals, and this is how we find them. There's just a board. They're really good about the documentation over there at the Go team.
Shay Nehmad:I don't know if I should have told you that, because now, like, why would you listen to the show? You can just open the board.
Maxim Vovshin:I think your summaries are still pretty good.
Shay Nehmad:Well, we'll see about that, because I'm gonna talk about a blog post that was sent in the Go performance group over on Telegram. So I don't know what I'll do now that they arrested the CEO of Telegram and, you know, they're starting to collect user data. I don't know. Maybe I'll just stay in the groups. Anyways, there's a really good blog post on the Red Hat blog about the Go compiler's register allocation, which I didn't understand at first.
Shay Nehmad:It's by Vladimir Makarov, just published, like, two days ago. It's a very, very technical blog post. It's not about anything new that's released; it's just Vlad going through a specific section of the Go compiler and explaining, like, how it works. Why should we listen to Vlad?
Shay Nehmad:He's the maintainer of the GCC register allocator, which is enough clout for me to be scared. Like, if someone says, oh, I maintain compiler code, it's like saying, yeah, I determine the laws of physics. It's very far away from my, like, application-level work, and I don't know why, but it feels very hard and intimidating. Maybe it is.
Shay Nehmad:Maybe it is. But it just feels very, like, oh, you write compilers? What the fuck? Know what I mean?
Maxim Vovshin:Yeah. It sounds like the zone that you're never coming close to.
Shay Nehmad:Yeah. Exactly. It's sort of like magic spells that are off limits. But this blog post actually helped me a lot with, you know, not worrying so much about this, let's say, imposter-syndrome kind of thing. Vlad just goes through how the Go register allocator works, and to be honest, I had to remember what a register allocator is.
Shay Nehmad:Even if you're not a compiler guru, like, you can figure this out. Basically, if you remember from, like, university, or maybe you learned it somewhere else, you have a memory hierarchy. Right? You have registers, AX, BX, blah blah blah, which are very, very fast but very expensive, so you don't have a lot of them. And then you have, like, the caches, L1 cache, L2 cache.
Shay Nehmad:Then you have your memory, like the RAM of your computer, and then you have a hard disk or an SSD, and then you can store stuff on the network. The farther down you go in the memory hierarchy, the more space you have and the cheaper it is, but the larger the delay is. Right? Getting something from a network drive somewhere, off of, I don't know, an S3 bucket, takes a lot longer than asking the registers inside the CPU, just because of physical limitations and how it's built.
Shay Nehmad:Yep. So there's a challenge here. The variables live in memory, but you need to decide which memory. When you compile Go code to assembly code, you need to decide if you're using registers or the caches or the main memory. Usually, the program doesn't decide about the cache, that's handled by the hardware, but it definitely does decide about registers versus RAM.
Shay Nehmad:And if you don't do any sort of optimization when you do register allocation, then what's gonna happen? If you just do something super naive, like, if I need a variable, I'll put it in a register now, and if I'm out of registers, I'll put something in memory, you're gonna have a bad time, because all your CPU is gonna do is move data between the registers and RAM, which is not very optimized. I learned way back in university that there's an order of magnitude of difference in performance. Like, your code can be 10 times slower if the compiler is bad at register allocation.
Shay Nehmad:Apparently, it's one of the most important parts of compiling, and specifically in Go, it's the part that takes the most time as well when you compile.
Maxim Vovshin:So specifically, the, like, the mechanism of choosing when to put something in a register and when not to. Right?
Shay Nehmad:Yeah. This specific part, because there's an algorithm there to see what's the most efficient way to do it. You also don't want to spill data for a long time. There are a lot of details in the blog post, but it's very interesting because you read it from the point of view of someone who already does this in a different compiler. In the other compilers, the register allocator is documented well enough that you can read and understand how it works without going into the source code.
Shay Nehmad:The Go compiler's register allocator doesn't have any documentation, basically, so Vlad goes into the code and basically writes the documentation for it. And it's very cool. There's a very specific algorithm, and it's like, for every preorder blah blah blah, calculate the next-use distance. And if a value is likely to be used very soon, put it in a register; if it's not likely, spill it, and then shuffle, rematerialize the input values, blah blah blah blah blah.
Shay Nehmad:Like, very, very cool algorithm.
Maxim Vovshin:Statistical calculation of when something should be used?
Shay Nehmad:So I'm not sure if it's statistical, because I think it has to be deterministic. I don't think there's any random stuff here, but I'm not a hundred percent sure, so I'm gonna be like, maybe just read it. But there's a lot of detail here that's very, very interesting, and once I read through it slowly, I had a lot of insight into how this stuff works under the hood.
Shay Nehmad:Now, I don't know if it's super relevant for my day-to-day work, but it's definitely nice to know that there were recent improvements for more sophisticated algorithms and, you know, some things that jump back from university, graph coloring and things like that, which are pretty cool. You can try to jump directly into the algorithm, but the summary of it, from what I understand from the blog post, is that you have a graph where every node is a variable, and edges connect variables that are live at the same time. And...
Maxim Vovshin:So if you have a loop, like a for loop, then all of the variables inside the for loop are probably live at the same time?
Shay Nehmad:Yeah. And the problem is that if they're live at the same time, you don't want to put them in the same register.
Maxim Vovshin:Yeah. Because then you'll have to switch them all the time.
Shay Nehmad:A lot of shuffling. So what do you do? You color them. Two connected nodes can't get the same color, and you need to color the graph. And then your number of colors is the number of registers you have.
Shay Nehmad:And that's why it's important to know which architecture you're compiling for and what hardware you're gonna run on. Right? Because different hardware has different numbers of registers. Pretty cool.
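To make the idea concrete, here's a toy sketch of greedy interference-graph coloring in Go. It is not the Go compiler's actual allocator (which works with next-use distances, spilling, and rematerialization, as described above); it just illustrates the "variables that are live together get different registers" idea, and the variable names are made up.

```go
package main

import "fmt"

// interference maps each variable to the variables that are live at the same
// time as it (its neighbors in the interference graph).
var interference = map[string][]string{
	"a": {"b", "c"},
	"b": {"a", "c"},
	"c": {"a", "b"},
	"d": {"a"},
}

// colorGraph greedily gives each variable the lowest "register" number not
// already taken by one of its neighbors. A number >= numRegs means the
// variable would have to be spilled to memory instead.
func colorGraph(numRegs int) map[string]int {
	colors := map[string]int{}
	for v, neighbors := range interference { // note: map order is random, any order yields a valid coloring
		used := map[int]bool{}
		for _, n := range neighbors {
			if c, ok := colors[n]; ok {
				used[c] = true
			}
		}
		c := 0
		for used[c] {
			c++
		}
		colors[v] = c
		if c >= numRegs {
			fmt.Printf("%s: spill to memory\n", v)
		} else {
			fmt.Printf("%s: register R%d\n", v, c)
		}
	}
	return colors
}

func main() {
	// Pretend the target machine has only two registers.
	colorGraph(2)
}
```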
Maxim Vovshin:So I think what we're saying is that this is, like, the hardcore software engineering, because I really have flashbacks from computer science courses. Yeah. Sounds terrifying.
Shay Nehmad:A bit intimidating. But one thing you can take away from this, I think, is knowing that this is very important; it can inform how you write your code. For example, you might structure your functions to be smaller, so you reduce the number of live variables at any given time. Now, I don't think you would specifically do it for performance, but if you read about it and you know about it, then in the back of your mind, while you're programming, you look at a function that has a ton of variables and you suddenly remember: oh, the compiler is gonna have a hard time optimizing this.
Shay Nehmad:It might drive you towards writing better code. Again, not specifically for performance, because if you're worried about performance, you should profile. You shouldn't, like, pre-optimize around the compiler's optimizations. You're not smart enough to do that, and I know I am not.
Maxim Vovshin:Yeah. But it's, like, another plus for not writing huge functions.
Shay Nehmad:Yeah. Just write less shitty code, which is good, I think. It's important, especially now that AI is writing all our code. Another reason to go through what the AI is spitting out and not take it at face value, which is very important.
Maxim Vovshin:Yep.
Shay Nehmad:So, a cool blog post. Thanks, Vlad, for posting it. I was, like, enamored by it. I don't know how useful it is, but very, very cool.
Maxim Vovshin:Yeah. That was interesting. So the next one, this new feature, the blog post is called "What's in an alias name?". It's basically talking about how you manage refactoring your code base with types. So if you want to refactor a large code base, you want to, for example, move some definitions, like functions, variables, types, from one package to another, then you have the option to reference, for example, functions.
Maxim Vovshin:You can just write, like, a wrapper function in, for example, package 2 that will reference package 1. And then each time someone wants to access that function, they can access it from both package 2 and package 1, because they both point to the same function. But with types, it wasn't so easy, because if you have, like, package 1 and package 2, and you have, like, a type called T, if you define the same type in package 2, then Go will treat it as a different type. Even though it's defined in exactly the same way, just redeclaring it gives you no way to point from one type to another. The way to do that is something called a type alias.
Maxim Vovshin:So the way you define it is like type A equals some other type, and the equals sign indicates it's an alias. And why is it useful? Because now you won't have to, like, do some tricks and change your functions to receive either package 1's T or package 2's T. You'll just be able to reference whichever package you want, and because it's an alias, you'll be good to go with both.
Maxim Vovshin:So it's a pretty nice little addition to help, I guess, mostly big projects migrate types between packages.
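A minimal sketch of the alias-versus-definition distinction, using bytes.Buffer as a stand-in for "a type that lives in another package"; in a real migration the old package would keep an alias like this so existing callers keep compiling.

```go
package main

import (
	"bytes"
	"fmt"
)

// Buffer is an alias: main.Buffer and bytes.Buffer are the exact same type.
// This is the mechanism gradual repairs use when a type moves packages: the
// old package keeps "type T = newpkg.T" so existing imports keep working.
type Buffer = bytes.Buffer

// Reader is a *definition*, not an alias: it's a distinct type, so a
// *bytes.Reader would not be accepted where a *Reader is expected.
type Reader bytes.Reader

func takesBytesBuffer(b *bytes.Buffer) int { return b.Len() }

func main() {
	var b Buffer
	b.WriteString("aliases are the same type")

	// Fine: *Buffer is *bytes.Buffer, no conversion needed.
	fmt.Println(takesBytesBuffer(&b))
}
```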
Shay Nehmad:What we had, if I understand correctly, was, like, normal aliasing. So you could alias a thing, and then Go, when was generics? 1.18 introduced generics, where you have the type and in square brackets you have, like, constraints, but you couldn't alias a generic thing. So I think it makes sense.
Maxim Vovshin:Yeah. So aliasing a type in general was already there, and now you can also alias a generic type, which I missed. It's also, like, pretty sweet, because you can alias a generic, but only, like, a subset of this generic thing, if I'm not mistaken. I might be.
Maxim Vovshin:But you can, like, pass exactly the list of type parameters from this generic that you want to alias. So it's like a combination, I think.
Shay Nehmad:Yeah. You can alias the generic type with specific type constraints; you don't have to conform to what the original type was. Exactly. So this is not released yet.
Shay Nehmad:Right?
Maxim Vovshin:Nope. It's to be released in the next Go version. Like, what is it going to be? Go 1.24, is it going to be?
Shay Nehmad:Looks like if you want to turn on the experiment, you can set the GOEXPERIMENT environment variable to aliastypeparams, which is pretty cool.
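A minimal sketch of what a generic alias looks like, with made-up names; on Go 1.23 this needs the experiment mentioned above (GOEXPERIMENT=aliastypeparams go run .).

```go
package main

import "fmt"

// Set is a hypothetical generic type that has "moved" into this package.
type Set[K comparable] map[K]struct{}

// StringSet is a plain, non-generic alias; this has been possible for a while.
type StringSet = Set[string]

// Handle is a *generic* alias, the new part: the alias itself has a type
// parameter list, so callers can keep using their own element types while
// the underlying type lives elsewhere.
type Handle[K comparable] = Set[K]

func main() {
	s := Handle[int]{1: {}, 2: {}}
	var same Set[int] = s // identical types, no conversion needed
	fmt.Println(len(same))
}
```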
Maxim Vovshin:So when I read about it, I hadn't thought about this issue at all. Like, it's a surprise to me that there is a need for this, because we just talked about how they are wary of adding new features and they don't want the language to get too messy. But apparently this is an issue for big projects, so good to know.
Shay Nehmad:Yeah. I think it's important to remember. Usually, my Go projects tend to be pretty small, just because even if they evolve really well and they become, like, huge back ends, microservices, whatever, they don't feel bloated. Like, they have just enough code to make everything work. And refactoring usually is pretty easy, just because the tooling is pretty good and it's a statically typed language, so it makes sense that it's pretty easy.
Shay Nehmad:But the moment you're working with, like, huge projects that also have some of their parts open source, maybe thousands of consumers of your various sublibraries, I'm imagining projects like Kubernetes. Right? Yeah.
Shay Nehmad:So migrating or refactoring things in such a huge project without breaking all the dependents, like all the people that import the code, is really a technical challenge.
Maxim Vovshin:Yeah. So I'm imagining that if someone wants to move a type from one package to another in Kubernetes, then this will make their life a lot easier. That I'm sure of.
Shay Nehmad:Or they could just not use Kubernetes. But that's a different story. I can just imagine a line on a CV, you know, "refactored a single generic type in Kubernetes," and someone reading it who's not technical enough to understand how much work that is and being, like, refactored a single type, this guy doesn't do anything. And then someone who actually knows this proposal knows how much work went into it, the work of, like, a hundred people over many years, and goes, oh, that's amazing. Pretty cool.
Shay Nehmad:The blog post is by Robert Griesemer. Thanks, Robert. Very cool. I think those are the blog posts for today. We wanted to talk about some more, but it looks like we're gonna need to do it pretty fast, so time to go to the lightning round.
Shay Nehmad:Lightning round. First up on the lightning round, VictoriaMetrics released a series of blog posts about Go concurrency and parallelism, things in the sync package. Very reminiscent of a talk I recently heard from Jelvan Leifenfeld at GopherCon Israel. The latest one, "Go singleflight melts in your code, not in your DB," explains the Go singleflight sync primitive, which is really, really cool. If you don't know what singleflight is, you should, and it introduced a new thing.
Shay Nehmad:Well, a new thing for me: a cache stampede. Do you know what a cache stampede is?
Maxim Vovshin:No idea.
Shay Nehmad:So it's apparently when a lot of different threads hit the cache and the cache has a lot of misses, so it asks the DB a few times for the same thing, which is exactly what the cache is supposed to prevent. Singleflight, which means just a single goroutine is gonna go through this piece of code and the rest are gonna wait for the result and get the same result, can prevent a cache stampede. Very, very nice wording for that one.
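A minimal sketch of the pattern with golang.org/x/sync/singleflight; the "database" lookup and key are stand-ins for illustration.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// slowDBLookup stands in for an expensive query; with singleflight it runs
// once per key even when many goroutines miss the cache at the same time.
func slowDBLookup(key string) (string, error) {
	time.Sleep(100 * time.Millisecond)
	return "value-for-" + key, nil
}

func get(key string) (string, error) {
	// Every concurrent caller with the same key shares one in-flight call
	// and receives the same result, preventing a cache stampede.
	v, err, _ := group.Do(key, func() (any, error) {
		return slowDBLookup(key)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, _ := get("user:42")
			fmt.Println(v)
		}()
	}
	wg.Wait()
}
```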
Shay Nehmad:What's next?
Maxim Vovshin:OK. So next: should go test fail if it doesn't find any tests? So let's say we're trying to run a test and we put a typo in the name of the test. Right now, if I remember correctly, because you told me, go test just outputs a warning and it passes. Right?
Shay Nehmad:Exactly.
Maxim Vovshin:So do we want it to fail or not? If you ask me, I would just go with how Python works, because that's mostly what I do today. In Python it fails, which makes sense to me, because it's probably a typo or something; I wouldn't run those test commands if I knew there was no such test. So I don't know.
Shay Nehmad:You can worry about this from the other side as well. Imagine you're running in CI and you're, like, run all the tests that start with Maxim, and then slowly you change your name and the CI test doesn't do anything anymore. But...
Maxim Vovshin:It's not failing. You just have, like, empty passes.
Shay Nehmad:So there's a live discussion on it. I'll just argue the other side so this show feels controversial. It passes today, and you can use that to run just the benchmarks. So you can say -run with an empty string, so no test fits the regex pattern, and then -bench with everything. So it doesn't run any unit tests, but it runs all the benchmark tests. With the change, this behavior would start failing.
Shay Nehmad:So it's not very easy to decide, like, do you want to break this behavior for other people? What happens if you do -run Foo, a test that does exist, but you also pass -skip for it, should it fail? Like, you explicitly wanted it to not run anything. Kinda weird. Not obvious.
Shay Nehmad:Not obvious.
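A minimal sketch of the benchmark-only idiom being described, in a hypothetical _test.go file; the package and function names are made up for illustration.

```go
// With today's behavior you can run only the benchmarks, selecting zero unit
// tests, via:
//
//	go test -run '^$' -bench .
//
// The '^$' regex matches no test names, so no tests run and the package still
// passes, while -bench . runs every benchmark. Under the proposed change,
// matching zero tests could become a failure, which would break this idiom.
package mathx

import "testing"

func sum(nums []int) int {
	total := 0
	for _, n := range nums {
		total += n
	}
	return total
}

func BenchmarkSum(b *testing.B) {
	nums := make([]int, 1024)
	for i := range nums {
		nums[i] = i
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = sum(nums)
	}
}
```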
Maxim Vovshin:Because it breaks things, maybe there's, like, a point to staying the way it is, but I don't know.
Shay Nehmad:They're looking for a solution. The thread is open. The link is in the show notes. If you have any clever solution to help these people, go post it now. And finally on the lightning round, it's just a cool link to a cool Twitter thread by Felix... I'm not gonna pronounce this correctly.
Shay Nehmad:I'm sorry. Felix Geisendörfer, who's a senior staff engineer at Datadog working on Go profiling. And he found a bug in how runtime.goexit shows up in the flame graph visualization. It's just a super technical Twitter thread, which is the reason I like this sort of tech Twitter thing: people basically write a very technical blog post, but they have to do it in very short paragraphs, which fits my attention span.
Shay Nehmad:The TLDR is that zero values messed up the visualization. Felix goes through the fix and then the improved screenshot, and he says, thank you, Datadog, for allowing me to improve the Go ecosystem as part of my job. And I fully agree. Thank you, Datadog, for letting Felix write these cool Twitter threads and enriching my life a little bit as well. Very cool stuff.
Shay Nehmad:Yeah. If only your service wasn't as expensive as, like, gold and oil, maybe I could say I'm a happy user, but not right now. That's the show, Maxim.
Maxim Vovshin:Yeah. It was really quick. Like, we went over a bunch of stuff. I didn't understand half of it, but it was nice.
Shay Nehmad:The show is gonna come out soon, and you could listen to it.
Maxim Vovshin:Yes. Till then, I'll go over the stuff again; I've learned a bunch. Nice. Thank you.
Shay Nehmad:Thanks for coming on. Well, let's move to a quick ad break. Welcome to our ad break. This week we're gonna do something a little different, because Maxim is here and Maxim is from Orca. We're gonna talk a little bit about Orca, because we have open roles for Go developers.
Shay Nehmad:Maxim, how's Orca?
Maxim Vovshin:Orca's great. Like, I've learned a ton since I came, which is almost two years already. And now we're recruiting Go developers to the sensor team. Basically, the team that will work on the sensor that Orca will deploy to their customers, which is really exciting. We didn't have one till now.
Shay Nehmad:Yeah. We had, like, partnerships and things like that with other companies, and now we're doing it on our own. Very, very cool. The role is in Warsaw, in Poland. So all our European listeners, perk up your ears.
Shay Nehmad:It's Go, Python, Docker, Kubernetes, Serverless, Django, Postgres, Elasticsearch, Kibana, Spark, Airflow, Iceberg, NoSQL, Kafka, SQS, Redis, Linux, AWS, Azure, GCP, Oracle Cloud, and Ali Cloud. That's, like, the tech stack. And we have four different roles where Go is very much a benefit: backend developer, runtime security researcher, agent developer, and DevOps engineer. And for one of these roles, like, four years in Go is even a specific requirement.
Shay Nehmad:And if you have knowledge of eBPF technology, security stuff, programming skills in Rust or C++ or C, vulnerability research or reverse engineering, good knowledge of Docker and Kubernetes internals, all these sorts of things, deployment tools, monitoring tools, Prometheus, Terraform, Helm, blah blah blah, these are all benefits. We're gonna post the links to all four roles in the show notes. And even if these job opportunities aren't a great fit for you, if you know someone who could be a good fit, let us know. We both work at Orca.
Shay Nehmad:I'm working a lot with this new team. It's very exciting, and they're doing stuff pretty right. I think it's a great time to join. Yeah.
Maxim Vovshin:They have, like, a strong start.
Shay Nehmad:And it started, like, sort of near my team, like, the CTO office and things like that. It was a pretty cool thing seeing things, like, form up, and now that the team is finally forming into an official thing and we have so many open roles, it's just a really great opportunity to join Orca, and you get the chance to work with Maxim and me. I don't know if that's a benefit or not. So the team is very recently formed. I asked Anton, the team lead, if he wanted people from the podcast to hear about these roles, and he was like, yeah, for sure, people in Go communities are exactly the people we wanna find.
Shay Nehmad:So I think: hand in your CV, join the team. You need Polish citizenship or a work visa for Poland, if that's relevant for you.
Maxim Vovshin:Yes. It has to be on-site. Yeah.
Shay Nehmad:Yeah. The job is hybrid. They're not in the office every day, just like you and I, but you have to be a Polish citizen or have a visa or something like that. Right. I'm not super strong in Polish law; I prefer to learn about the Go compiler rather than other countries' work permits.
Shay Nehmad:But just an FYI, this is not a fully remote position. So let's do the normal ad break, now that you've heard about Orca hiring and you might be furiously handing in your CV. This show is supported, not by Orca, they didn't sponsor us to talk about this, by the way. This show is supported by Patreon supporters who pay for the show every week.
Shay Nehmad:We really, really... well, Maxim doesn't, because he doesn't get any money out of it. But I appreciate all your support, and I know John does as well. We don't actually make money out of it. It's mostly us paying our editor and hosting fees and just trying to maintain this very expensive hobby of doing a professional podcast. If you wanna reach us, you can find us at cupogo.dev, where you can find links to our Slack channel, find our store where you can buy some cool swag, or email us at news@cupogo.dev, that is news@cupogo.dev.
Shay Nehmad:Another way to help the show, if you like it, is to leave a review on Spotify or Apple Podcasts or wherever you listen to your podcasts. I think two weeks ago, or maybe even last week, there was a new technical podcast doing the rotations: The Pragmatic Engineer, by Gergely Orosz. So he has a new podcast.
Shay Nehmad:He just shared a post about how, oh my god, dude, getting up the rankings meant so much for the podcast's exposure. So if you didn't rate us yet, that could be cool. We're not expecting to be, like, the number one podcast ever; it's obviously a very niche subject, but getting into more Go developers' ears could be very cool. That's pretty much it.
Shay Nehmad:Thanks, Maxim, for joining. I hope you had a good time.
Maxim Vovshin:Thank you, John.
Shay Nehmad:And if you want to find Maxim, join Orca. He's not big on "follow me on Twitter" or anything like that. So I guess your best bet is to hand in your CV.
Maxim Vovshin:Will do it.
Shay Nehmad:Alright. Thanks a lot for listening, everyone. Program exited.