BPF Performance Tools: Linux System and Application Observability (book)
http://www.brendangregg.com/blog/2019-07-15/bpf-performance-tools-book.html
https://redd.it/cdjppl
@r_linux
Kernel developers remove NPU DMA code used by the proprietary Nvidia driver
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=192f0f8e9db7efe4ac98d47f5fa4334e43c1204d
https://redd.it/cdj5gb
@r_linux
Linux Environment toolkit
Hey guys,
I am currently working on a toolkit that will help in troubleshooting problems when building Linux environments.
Currently I only have a simple shell script that simplifies curl and makes it easier to work with lots of URLs.
My goal is to make scripts that simplify some of the commands and make them more automated.

Constructive feedback is appreciated.

Thanks
[https://github.com/Vastef/Matools](https://github.com/Vastef/Matools)
https://redd.it/cdekx4
@r_linux
mplayer 4 videos simultaneously, tiled?
Say I want to play multiple videos at once with mplayer; I can use the following:

find . -type f -name "*.mp4" | shuf | xargs -P4 -l mplayer -zoom -xy .5

It works pretty well. My question is: can I do some sort of automatic tiling of these windows? I use Xfce, btw.
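One approach (not from the thread itself) is to skip window-manager tiling and instead hand each mplayer instance an explicit position hint. A rough sketch, assuming a 1920x1080 screen, roughly 16:9 videos, and a window manager that honors position hints:

# Hypothetical sketch: tile up to four videos in a 2x2 grid by computing a
# position for each mplayer window instead of relying on the WM.
W=960; H=540
i=0
find . -type f -name "*.mp4" | shuf | head -n 4 | while read -r f; do
    x=$(( (i % 2) * W ))   # column 0 or 1
    y=$(( (i / 2) * H ))   # row 0 or 1
    mplayer -really-quiet -zoom -xy "$W" -geometry "+${x}+${y}" "$f" &
    i=$(( i + 1 ))
done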
https://redd.it/cdnhja
@r_linux
I created a zsh plugin that automatically sends out a notification when a long-running task has completed.
https://www.reddit.com/r/zsh/comments/cdnil5/i_created_a_plugin_that_automatically_sends_out_a/
https://redd.it/cdns3o
@r_linux
Asus Numeric Keypad/Touchpad combo - Is anyone working on this and if so, how do I get involved?
https://www.reddit.com/r/archlinux/comments/ca9pcm/asus_numeric_keybpadtouchpad_combo_keypad_not/
https://redd.it/cdo9eh
@r_linux
i386 arch is our new enemy now
Distro maintainers say that it is yet another "burden" to maintain and that they don't have enough devs for it, but the question is: why has this started happening only in the last 2-3 years? Dual-architecture support has been a thing since the early 2000s, if I recall correctly, so how come they were able to afford the maintenance until now?
Also, distro maintainers dropping support is something I can understand, as they have to deal with low-level, hardware-centric C code, but why VMs and interpreters? I simply don't get why Java and Node.js would remove support for i386. After all, Java's very mantra is Write Once, Run Anywhere (WORA), and so should Node's be, as it is also a runtime with capabilities similar to Java's.
Why is everyone suddenly obsessed with making i386 a villain? What great catastrophe will occur if some poor chaps sitting on older hardware are able to use it with some modern software? Is there no compassion and humanity left these days? (Open source was all about these two things when it began.)
https://redd.it/cdr06w
@r_linux
Views on the Red Hat acquisition by IBM
It is all over the news that IBM is acquiring Red Hat in a multibillion-dollar deal and that this is going to change, or even revolutionize, cloud computing. What are your thoughts on that?
https://redd.it/cdr7j9
@r_linux
Happy Birthday Slackware!
26 years ago, Patrick Volkerding released a fork of SLS Linux which happens to be the oldest surviving Linux distro today!
Slackware promises an experience like the old days of Linux, thanks to its nostalgic installation experience and systemd-free desktop.
This is the release letter Patrick published on the night of 16 July 1993. Happy Birthday, Slackware!
http://www.slackware.com/announce/1.0.php
https://redd.it/cdrv7a
@r_linux
vlc-pause-click-plugin: Plugin for VLC that pauses/plays a video on mouse click
https://github.com/nurupo/vlc-pause-click-plugin
https://redd.it/cdrxtk
@r_linux
3700X and its amazing performance
Just upgraded my 2700 to a 3700X, and after doing benchmarks on Windows and Hackintosh (both are kinda wonky right now), and given the average Geekbench scores with mere 2133 MHz memory, I'm quite surprised by these scores on Linux:
https://browser.geekbench.com/v4/cpu/13907458
Windows and Hackintosh in comparison:
https://browser.geekbench.com/v4/cpu/13880105
https://browser.geekbench.com/v4/cpu/13880190
And my 1080 Ti compute scores (macOS on the 2700, Windows on the 3700X, and Linux on the 3700X):
https://browser.geekbench.com/v4/compute/4286842
https://browser.geekbench.com/v4/compute/4286616
https://browser.geekbench.com/v4/compute/4296720
I do React Native development and decided to do a clean build and run the app in an emulator; man, it was wicked fast.
Of course, this was after I updated systemd with the workaround for that weird RDRAND instruction bug. I'm a pretty happy camper right now. I really do feel like I have a no-compromise PC. On top of that, I have a Fractal Node 202 case, so I can lug all this power around in a backpack too!
Thank you AMD <3<3<3, even though the launch was a bit rocky!
https://redd.it/cdsluh
@r_linux
KDE's Plasma 5 desktop is five! In five years, hundreds of people have contributed their time and knowledge to make the Plasma desktop what it is today. Happy birthday, Plasma!
https://peertube.mastodon.host/videos/watch/963e4e9f-6754-42b0-8b31-0495fb98f15b
https://redd.it/cdtvnr
@r_linux
Understanding Linux CPU Load
https://3keys.dev/2019/07/16/linux-cpu-load/
https://redd.it/cduexo
@r_linux
LVM on top of hardware RAID?
I have a RAID5 disk and recently added two new disks to the server, configured as RAID1.
LVM is configured on the RAID5 disk, and I would like to extend the logical volume to include the newly added RAID1 one.
Are there any risks of data corruption or loss?
What happens if one of the disks in the RAID5 or the RAID1 breaks?
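For the extension itself, the usual LVM sequence for growing an existing volume group onto a new block device looks roughly like the following sketch. The device and volume names are placeholders, not taken from the post:

# Hypothetical sketch: /dev/sdb stands in for the new hardware-RAID1 device,
# vg0/lv_data for the existing volume group and logical volume.
pvcreate /dev/sdb                        # initialize the RAID1 device as an LVM physical volume
vgextend vg0 /dev/sdb                    # add it to the existing volume group
lvextend -l +100%FREE /dev/vg0/lv_data   # grow the logical volume into the new space
resize2fs /dev/vg0/lv_data               # grow the filesystem (ext4 shown; use xfs_growfs for XFS)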
https://redd.it/cduyjf
@r_linux
i386 Salvation - Support information for the i386 architecture
https://github.com/prahladyeri/CuratedLists/blob/master/lists/x86_salvation.md
https://redd.it/cdveuq
@r_linux
Why we don't use Docker
When you don’t have clients, you don’t have to follow the herd. You cut your own path. And that is what we do. Every day.
Technology is a solution to a problem, and our problems are quite unique. As a low-latency proprietary trading firm, we need systems that strip away complexity, and for us, Docker is not it.
**Not having clients liberates us**
While it seems like a small detail, not having clients makes a substantial difference to how we code compared to most companies. We trade 24 hours a day with our own capital and at our own risk, in a wide range of financial products on all major financial markets globally. This means we are accountable only to ourselves.
Our code is written exclusively for our traders. The resulting auto-trading application runs on our servers, monitored by our application engineers. This is quite a unique and amazing position to be in: nobody influences our choices when it comes to technology! While many companies are pushed to use particular languages or frameworks because of dependencies (or trends...), we are not.
Which leads me to the first reason why we don't use Docker: we like consistency, and if a technology is not a perfect fit for our stack, we won't use it.
The most quoted Docker benefit is reproducibility. Yes, our developers need production-like servers to test their code. We simply buy some more and configure them exactly the same way as production.

**A simple, fast-growing environment**
Our industry moves fast. We're talking nanoseconds. Because we offer competitive, tradable prices to market participants in all market conditions, continuously, across the world, we as engineers have to move equally fast.
We'll discuss a new feature with our traders on Monday, have a design meeting the next day and put the feature in production at the end of the same week. There is only one way to move that fast: things need to be as simple as possible, and every tiny bit of complexity that doesn't pay off is stripped away. We always favour the simple, almost naive, version of an algorithm over an overcomplicated and over-engineered one that supposedly performs better.
This ties in nicely with our view on consistency: as our entire stack is developed in-house and uses the same set of languages and libraries, our production environment is lean, homogeneous and simple enough not to need containers.
You could argue that Docker is simple to use, and while this is true, for our business the added complexity of a containerisation layer between the application and the OS far outweighs all of Docker's advantages.

**We like fast code, running on bare metal**
Unlike other industries, we like bare-metal servers, and we don't use virtual machines or containers for our trading systems. All our production code runs on Linux and bare Intel CPUs.
The reason for this choice is simple: we need to be fast, and when you want to be the fastest, virtual machines are a no-go for almost all of our applications. Like other market makers, we use FPGAs for the most speed-sensitive part of our business, but we still need [low-latency C++](https://www.youtube.com/watch?v=NH1Tta7purM) for the code operating our trading strategies.
Over and above the FPGAs, we also trade with fully software-automated trading systems on many exchanges around the world, and network latency in those applications is crucial. In this perpetual fight against jitter, Docker would be the antithesis of [ultra-low-latency network cards](https://solarflare.com/), userspace drivers and optimized C++.

**Fat binaries as a container**
Even with handcrafted C++ applications, we might hit some dependency issues: dependencies between applications, dependencies between applications and libraries, and conflicting versions. The latter is often mentioned as the reason Docker uses a single container per application.
This problem was effectively solved a long time ago by Linux package managers and CMake, but we didn't even need to go that far: at Optiver, we like
fat binaries!
Fat binaries are an incredibly simple, powerful (and old) way to develop and deploy binaries. Recent languages such as Golang and Rust support them as a way to containerize their scripts instead of using Docker. By "fat binary", I refer to a binary that has been statically linked against all of its dependencies. It doesn't seem great at first glance, I know.
There are a few reasons for this choice:
* *Simplicity of deployment*: in the same way that a Docker container runs your application with all of its dependencies in an isolated environment, a fat binary doesn't have any external dependencies (except libstdc++ and libc).
* *Safety*: when you test a binary, you are certain that you are testing exactly the same code that will run in production. This might seem paranoid, but [you could imagine why](https://en.wikipedia.org/wiki/Knight_Capital_Group#2012_stock_trading_disruption).
* *Performance*: static linking will always be faster: at the very least you skip a jump through the PLT, and at best you benefit from inlining.
The main downside of fat binaries is that the binary itself is bigger (though still much smaller than a Docker image...). This is not a problem for us at all, as we deploy them on our own servers. Our main automated trading system is thus a binary of around 150MB (14MB stripped). Again, if the size ever became an issue we could strip them, but so far it isn't one, so we don't.
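To make the idea concrete, a build along these lines could link the third-party libraries statically while leaving libc and libstdc++ dynamic. This is a minimal sketch, not Optiver's actual toolchain, and the library names are placeholders:

# Hypothetical sketch: statically link third-party libraries so the only
# remaining runtime dependencies are libc and libstdc++; foo and bar are
# placeholder library names.
g++ -O2 -std=c++17 -o trading_app main.cpp \
    -Wl,-Bstatic -lfoo -lbar -Wl,-Bdynamic -lpthread

ldd ./trading_app                             # confirm which shared libraries remain
strip -o trading_app.stripped ./trading_app   # ~150MB unstripped vs 14MB stripped in the post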
​
**Tech fetishes are real**
Some engineers out there get so hung up on the technology that they forget to consider the context. For us, that is working backwards. We don't contain our options by starting with a set container tech ;-). We consider the problem first and build from there.
This way of thinking is real, it's called having a tech fetish, and we see it surprisingly often, especially during interviews. And as you can probably guess, it's not our cup of tea.
​
David Gross,
Engineer at [Optiver](https://www.optiver.com/eu/en?utm_source=reddit&utm_medium=blog-post&utm_campaign=optiver-eu-tech-blog-david)
https://redd.it/cdw9om
@r_linux
Proxmox VE 6.0 release with Ceph Nautilus and Corosync 3
https://www.proxmox.com/en/news/press-releases?view=article&id=133:proxmox-ve-6-0&catid=11:news
https://redd.it/cdvp0b
@r_linux
Office 365 declared illegal in German schools due to privacy risks.
https://arstechnica.com/information-technology/2019/07/germany-threatens-to-break-up-with-microsoft-office-again/
https://redd.it/cdxb6w
@r_linux
ZFS on Linux still has annoying issues with ARC size
https://utcc.utoronto.ca/~cks/space/blog/linux/ZFSOnLinuxARCShrinkage
https://redd.it/cdwl7m
@r_linux