This is the CLI & GUI v0.17.1.3 'Oxygen Orion' point release. This release predominantly features bug fixes and performance improvements. Users, however, are recommended to upgrade, as it includes mitigations for the issue where transactions occasionally fail.
We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
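As a rough illustration on Linux (assuming gpg and sha256sum are installed, that binaryFate's key from /utils/gpg_keys has been saved locally as binaryfate.asc, and that the signed hash list below has been saved as hashes.txt; file names are examples only):
gpg --import binaryfate.asc
gpg --verify hashes.txt
sha256sum monero-linux-x64-v0.17.1.3.tar.bz2
Compare the printed checksum against the corresponding line in the signed message; the dedicated guides linked above remain the authoritative walkthrough.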
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 # This GPG-signed message exists to confirm the SHA256 sums of Monero binaries. # # Please verify the signature against the key for binaryFate in the # source code repository (/utils/gpg_keys). # # ## CLI 38a04a7bd00733e9d943edba3004e44730c0848fe5e8a4fca4cb29c12d1e6b2f monero-android-armv7-v0.17.1.3.tar.bz2 0e94f58572646992ee21f01d291211ed3608e8a46ecb6612b378a2188390dba0 monero-android-armv8-v0.17.1.3.tar.bz2 ae1a1b61d7b4a06690cb22a3389bae5122c8581d47f3a02d303473498f405a1a monero-freebsd-x64-v0.17.1.3.tar.bz2 57d6f9c25bd1dbc9d6b39fcfb13260b21c5594b4334e8ed3b8922108730ee2f0 monero-linux-armv7-v0.17.1.3.tar.bz2 a0419993fbc6a5ca11bcd2e825acef13e429824f4d8c7ba4ec73ac446d2af2fb monero-linux-armv8-v0.17.1.3.tar.bz2 cf3fb693339caed43a935c890d71ecab5b89c430e778dc5ef0c3173c94e5bf64 monero-linux-x64-v0.17.1.3.tar.bz2 d107384ff7b1f77ee4db93940dbfda24d6045bf59c43169bc81a0118e3986bfa monero-linux-x86-v0.17.1.3.tar.bz2 79557c8bee30b229bda90bb9ee494097d639d60948fc2ad87a029359b56b1b48 monero-mac-x64-v0.17.1.3.tar.bz2 3eee0d0e896fb426ef92a141a95e36cb33ca7d1e1db3c1d4cb7383994af43a59 monero-win-x64-v0.17.1.3.zip c9e9dde61b33adccd7e794eba8ba29d820817213b40a2571282309d25e64e88a monero-win-x86-v0.17.1.3.zip # ## GUI 15ad80b2abb18ac2521398c4dad9b8bfea2e6fc535cf4ebcc60d99b8042d4fb2 monero-gui-install-win-x64-v0.17.1.3.exe 3bed02f9db5b7b2fe4115a636fecf0c6ec9079dd4e9284c8ce2c67d4996e2a4a monero-gui-linux-x64-v0.17.1.3.tar.bz2 23405534c7973a8d6908b76121b81894dc853039c942d7527d254dfde0bd2e8f monero-gui-mac-x64-v0.17.1.3.dmg 0a49ccccb561445f3d7ec0087ddc83a8b76f424fb7d5e0d725222f3639375ec4 monero-gui-win-x64-v0.17.1.3.zip # # # ~binaryFate -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl+oVkkACgkQ8K9NRioL 35Lmpw//Xs09T4917sbnRH/DW/ovpRyjF9dyN1ViuWQW91pJb+E3i9TY+wU3q85k LyTihDB5pV+3nYgKPL9TlLfaytJIQG0vYHykPWHVmYmvoIs9BLarGwaU3bjO0rh9 ST5GDMdvxmQ5Y1LTwVfKkmBJw26DAs0xAvjBX44oRQjjuUdH6JdLPsqa5Kb++NCM b453m5s8bT3Cw6w0eJB1FQEyQ5BoDrwYcFzzsS1ag/C4Ylq0l6CZfEambfOQvdUi 7D5Rywfhiz2t7cfn7LaoXb74KDA/B1bL+R1/KhCuFqxRTOQzq9IxRywh4VptAAMU UR7jFHFijOMoyggIbkD48JmAjlBnqIyQJt4D5gbHe+tSaSoKdgoTGBAmIvaCZIng jfn9pTNzIJbTptsQhhyZqQQIH87D8BctZfX7pREjJmMNGwN2jFxXqUNqYTso20E6 YLtC1mkZBBZ294xHqT1mQpfznc6uVJhhoJpta0eKxkr1ahrGvWBDGZeVhLswnBcq 9dafAkR14rdK1naiCsygb6hMvBqBohVu/bWuhycJcv6XRvlP7UHkR6R8+s6U4Tk2 zaJERQF+cHQpEak5aEJIvDlb/mxteGyvPkPyL7UmADEQh3C4nREwkDSdnitYnF+e HxJZkshoC98+YCkWUP4+JYOOT158jKao3u0laEOxVGOrPz1Nc64= =Ys4h -----END PGP SIGNATURE-----
Upgrading (GUI)
Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear shortly with the new binary. In case you want to update manually, you ought to perform the following steps:
Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows: [1] On the second page of the wizard (first page is language selection) choose Open a wallet from file [2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users//Monero/ (Mac OS X), or home//Monero/ (Linux). Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.
Upgrading (CLI)
You ought to perform the following steps:
Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
Extract the new binaries to a new directory of your liking.
Copy over the wallet files from the old directory (i.e. the v0.15.x.x, v0.16.x.x, or v0.17.x.x directory); example commands are sketched after these steps.
Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.17.1.3, it will simply pick up where it left off.
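For example, on Linux the manual upgrade described in the steps above roughly boils down to the following (the directory names monero-old and monero-new and the wallet name mywallet are placeholders for illustration):
mkdir ~/monero-new
tar -xjf monero-linux-x64-v0.17.1.3.tar.bz2 -C ~/monero-new
cp ~/monero-old/mywallet ~/monero-old/mywallet.keys ~/monero-new/
After that, start monerod from the extracted folder as in step 4.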
In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to manually set a remote node, you ought to use Advanced mode. A guide can be found here: https://www.getmonero.org/resources/user-guides/remote_node_gui.html
It's that time of year again, and we've got a new version of macOS on our hands! This year we've finally jumped off the 10.xx naming scheme and are now going to 11! And with that, a lot has changed under the hood in macOS. As with previous years, we'll be going over what's changed in macOS and what you should be aware of as a macOS and Hackintosh enthusiast.
Has Nvidia Support finally arrived?
What has changed on the surface
A whole new iOS-like UI
macOS Snapshotting
What has changed under the hood
New Kernel cache system: KernelCollections!
New Kernel Requirements
Secure Boot Changes
No more symbols required
Broken Kexts in Big Sur
MSI Navi installer Bug Resolved
New AMD OS X Kernel Patches
Other notable Hackintosh issues
Several SMBIOS have been dropped
Dropped hardware
Extra long install process
X79 and X99 Boot issues
New RTC requirements
SATA Issues
Legacy GPU Patches currently unavailable
What’s new in the Hackintosh scene?
Dortania: a new organization has appeared
Dortania's Build Repo
True legacy macOS Support!
Intel Wireless: More native than ever!
Clover's revival? A frankenstein of a bootloader
Death of x86 and the future of Hackintoshing
Getting ready for macOS 11, Big Sur
Has Nvidia Support finally arrived?
Sadly, every year I have to answer the obligatory question: no, there is no new Nvidia support. Currently, Nvidia's Kepler line is the only natively supported generation. However, macOS 11 makes some interesting changes to the boot process, specifically moving GPU drivers into stage 2 of booting. Why this is relevant comes down to Apple's initial reason for killing off Web Drivers: Secure Boot. Secure Boot cannot work with Nvidia's Web Drivers due to how early Nvidia's drivers have to initialize, and thus Apple refused to sign the binaries. With Big Sur, third-party GPU drivers could return; the chances are still super slim, but slightly higher than with 10.14 and 10.15.
What has changed on the surface
A whole new iOS-like UI
Love it or hate it, we've got a new UI more reminiscent of iOS 14, with hints of skeuomorphism (a somewhat subtle callback to previous Mac UIs, which have neat details in the icons). You can check out Apple's site to get a better idea.
macOS Snapshotting
Building on a feature initially baked into APFS back in 2017 with the release of macOS 10.13, High Sierra, macOS's main system volume is now both read-only and snapshotted. What this means is:
3rd parties have a much more difficult time modifying the system volume, allowing for greater security
OS updates can now be installed while you're using the OS, similar to how iOS handles updates
Time Machine can now perform backups more easily, without the file inconsistencies HFS Plus suffered from while the machine was in use
However there are a few things to note with this new enforcement of snapshotting:
OS snapshots are not calculated as used space, instead being labeled as purgeable space
Disabling macOS snapshots for the root volume will break software updates, and can corrupt data if an update is applied anyway
What has changed under the hood
Quite a few things actually! Both in good and bad ways unfortunately.
New Kernel Cache system: KernelCollections!
So for the past 15 years, macOS has been using the prelinked kernel as a form of kernel and kext caching. With macOS Big Sur's new read-only, snapshot-based system volume, a new form of caching has been developed: KernelCollections! How this differs from previous OSes:
Kexts can no longer be hot-loaded, instead requiring a reboot to load with kmutil
Secure Boot Changes
With regards to Secure Boot, all officially supported Macs will now support some form of Secure Boot even if there's no T2 present. This is done in 2 stages:
macOS will now always verify the ECID value against the secure boot manifest files (if present)
On T2 Macs, this ECID value is burned into the chip
On regular Macs, it is derived from the first 8 bytes of your SystemUUID value
OS Snapshots are now verified on each boot to ensure no system volume modifications occurred
apfs.kext and AppleImage4.kext verify the integrity of these snapshots
While technically these security features are optional and can be disabled after installation, many features including OS updates will no longer work reliably once they are disabled. This is due to the heavy reliance on snapshots for OS updates, as mentioned above, so we highly encourage all users to ensure that SecureBootModel is set to Default or higher at a minimum.
Note: ApECID is not required for functionality, and can be skipped if so desired.
Note 2: OpenCore 0.6.3 or newer is required for Secure Boot in Big Sur.
No more symbols required
This point is the most important part, as this is what we use for kext injection in OpenCore. Currently Apple has left symbols in place, seemingly for debugging purposes; however, this is a bit worrying, as Apple could outright remove symbols in later versions of macOS. For Big Sur's cycle we'll be good on that end, but we'll be keeping an eye on future releases of macOS.
New Kernel Requirements
With this update, the AvoidRuntimeDefrag Booter quirk in OpenCore broke. Because of this, the macOS kernel will fall flat when trying to boot. The reason is that cpu_count_enabled_logical_processors requires the MADT (APIC) table, so OpenCore will now ensure this table is made accessible to the kernel. Users will, however, need a build of OpenCore 0.6.0 with commit bb12f5 or newer to resolve this issue. Additionally, both the Kernel Allocation requirements and Secure Boot have also broken with Big Sur due to the new caching system discussed above. Thankfully, these have been resolved in OpenCore 0.6.3. To check your OpenCore version, run the following in a terminal:
nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
If you're not up to date and running OpenCore 0.6.3+, see here on how to upgrade OpenCore: Updating OpenCore, Kexts and macOS
Broken Kexts in Big Sur
Unfortunately, with the aforementioned KernelCollections, some kexts have broken or have been hindered in some way. The main kexts that currently have issues are those relying on Lilu's userspace patching functionality (see the summary list at the end of this post).
Thankfully, the most important kexts rely on the kernelspace patcher, which is in fact working again.
MSI Navi installer Bug Resolved
For those receiving boot failures in the installer due to having an MSI Navi GPU installed, macOS Big Sur has finally resolved this issue!
New AMD OS X Kernel Patches
For those running AMD-based CPUs, you'll want to update your kernel patches as well, since the patches have been rewritten for macOS Big Sur support.
Several SMBIOS have been dropped
Big Sur drops a few Ivy Bridge and Haswell based SMBIOS from macOS, so check below that yours wasn't dropped:
iMac14,3 and older
Note iMac14,4 is still supported
MacPro5,1 and older
MacMini6,x and older
MacBook7,1 and older
MacBookAir5,x and older
MacBookPro10,x and older
If your SMBIOS was supported in Catalina and isn't included above, you're good to go! We also have a more in-depth page here: Choosing the right SMBIOS For those wanting a simple translation for their Ivy and Haswell Machines:
iMac13,1 should transition over to using iMac14,4
iMac13,2 should transition over to using iMac15,1
iMac14,2 and iMac14,3 should transition over to using iMac15,1
Note: AMD CPUs users should transition over to MacPro7,1
iMac14,1 should transition over to iMac14,4
Dropped hardware
Currently only certain hardware has been officially dropped:
"Official" Consumer Ivy Bridge Support(U, H and S series)
These CPUs will still boot without much issue, but note that no Macs are supported with consumer Ivy Bridge in Big Sur.
Ivy Bridge-E CPUs are still supported thanks to being in MacPro6,1
Ivy Bridge iGPUs slated for removal
HD 4000 and HD 2500, however currently these drivers are still present in 11.0.1
Similar to Mojave and Nvidia's Tesla drivers, we expect Apple to forget about them and only remove them in the next major OS update next year
Note: while AirPortBrcm4360.kext has been removed in Big Sur, support for the 4360 series cards has been moved into AirPortBrcmNIC.kext, which still exists.
Extra long install process
Due to the new snapshot-based OS, installation now takes some extra time for sealing. If you get stuck at Forcing CS_RUNTIME for entitlement, do not shut down. This will corrupt your install and break the sealing process, so please be patient.
X79 and X99 Boot issues
With Big Sur, IOPCIFamily went through a decent rewrite, causing many X79 and X99 boards to fail to boot as well as panic on IOPCIFamily. To resolve this issue, you'll need to disable the unused uncore bridge:
New RTC requirements
With macOS Big Sur, AppleRTC has become much more picky about whether your OEM correctly mapped the RTC regions in your ACPI tables. This is mainly relevant on Intel's HEDT series boards; I documented how to patch said RTC regions in OpenCorePkg:
For those having boot issues on X99 and X299, this section is super important; you'll likely get stuck at PCI Configuration Begin. You can also find prebuilts here for those who do not wish to compile the file themselves:
SATA Issues
For some reason, Apple removed the AppleIntelPchSeriesAHCI class from AppleAHCIPort.kext. Due to the outright removal of the class, trying to spoof to another ID (generally done by SATA-unsupported.kext) can fail for many and create instability for others.
A partial fix is to block Big Sur's AppleAHCIPort.kext and inject Catalina's version with any conflicting symbols patched. You can find a sample kext here: Catalina's patched AppleAHCIPort.kext
This will work in both Catalina and Big Sur, so you can remove SATA-unsupported if you want. However, we recommend setting the MinKernel value to 20.0.0 to avoid any potential issues.
Legacy GPU Patches currently unavailable
Due to major changes in many frameworks around GPUs, those using ASentientBot's legacy GPU patches are currently out of luck. We recommend users with these older GPUs either stay on Catalina until further developments arise or buy an officially supported GPU.
What’s new in the Hackintosh scene?
Dortania: a new organization has appeared
As many of you have probably noticed, a new organization focusing on documenting the hackintoshing process has appeared. Originally under my alias, Khronokernel, I started to transition my guides over to this new family as a way to concentrate the vast amount of information around Hackintoshes, to both ease users and give a single trusted source of information. We work quite closely with the community and developers to ensure the information is correct, up to date, and of the best standard. While not perfect in every way, we hope to be the go-to resource for reliable Hackintosh information. And for the times our information is outdated, missing context, or generally needs improving, we have our bug tracker to allow the community to more easily bring attention to issues and speak directly with the authors:
Dortania's Build Repo
Kexts here are built right after each commit, and the repo currently supports most of Acidanthera's kexts and some 3rd party devs' as well. If you'd like to add support for more kexts, feel free to PR: Build Repo source
True legacy macOS Support!
As of OpenCore's latest version, 0.6.2, you can now boot every x86-based build of OS X/macOS! A huge achievement on @Goldfish64's part: we now support every major kernel cache version, both 32- and 64-bit. This means machines like Yonah and newer should work great with OpenCore, and you can even relive the old days of OS X, such as OS X 10.4! Dortania's guides have been updated accordingly to accommodate builds of those eras; we hope you get as much enjoyment going back as we did working on this project!
Intel Wireless: More native than ever!
Another amazing step forward in the Hackintosh community: near-native Intel Wi-Fi support! Thanks to the endless work of many contributors to the OpenIntelWireless project, we can now use Apple's built-in IO80211 framework to get support nearly identical to that of Broadcom wireless cards, including features like network access in recovery and Control Center support. For more info on the developments, please see the itlwm project on GitHub: itlwm
Note: native support requires AirportItlwm.kext and SecureBootModel enabled in OpenCore. Alternatively, you can force IO80211Family.kext to ensure AirportItlwm works correctly.
AirDrop support is currently not implemented either; however, it is actively being worked on.
Clover's revival? A frankenstein of a bootloader
As many in the community have seen, a new bootloader popped up back in April of 2019 called OpenCore. This bootloader was made by the same people behind projects such as Lilu, WhateverGreen, AppleALC, and many other extremely important utilities for both the Mac and Hackintosh community. OpenCore's design had been properly thought out, with security auditing and proper road mapping laid down; it was clear that this was to be the next stage of hackintoshing for the years we have left with x86.
Now let's bring this back to the old crowd favorite, Clover. Clover has been having a rough time recently, both with the community and stability-wise; with many devs jumping ship to OpenCore and Clover's stability breaking more and more with the C++ rewrites, it was clear Clover was on its last legs. Interestingly enough, the community didn't want Clover to die, similarly to how Chameleon lived on through Enoch. And thus, we now have the Clover OpenCore integration project (now merged into master with r5123+). The goal is to combine OpenCore into Clover, allowing the project to live a bit longer, as Clover's current state can no longer boot macOS Big Sur or older versions of OS X such as 10.6.
As of writing, this project seems a bit confusing, as there seems to be little reason to actually support Clover. Many of Clover's properties have feature parity in OpenCore, and trying to combine both C++ and C ruins many of the features and benefits either language provides. The main feature OpenCore does not support is macOS-only ACPI injection; the reasoning is covered here: Does OpenCore always inject SMBIOS and ACPI data into other OSes?
Death of x86 and the future of Hackintoshing
With macOS Big Sur, a big turning point is about to happen with Apple and their Macs. As we know, Apple will be shifting to in-house designed Apple Silicon Macs (really just ARM), and thus x86 machines will slowly be phased out of their lineup within 2 years. What does this mean for both x86-based Macs and Hackintoshing in general? Well, we can expect about 5 years of proper OS support for the iMac20,x series, which released earlier this year, with an extra 2 years of security updates. After this, Apple will most likely stop shipping x86 builds of macOS, and hackintoshing as we know it will have passed away. For those still in denial and hoping something like ARM Hackintoshes will arrive, please consider the following:
We have yet to see a true iPhone "Hackintosh", and thus the likelihood of an ARM Hackintosh is low as well
There have been successful attempts to get the iOS kernel running in virtual machines, however much work is still to be done
Apple's use of "Apple Silicon" hints that ARM is not actually what future Macs will be running, instead we'll see highly customized chips based off ARM
For example, Apple will be heavily relying on hardware features such as W^X, kernel memory protection, Pointer Authentication, etc. for security, and thus both macOS and applications will be dependent on them. This means hackintoshing on bare metal (without a VM) will become extremely difficult without copious amounts of work
Also keep in mind that Apple Silicon will no longer be UEFI-based like Intel Macs currently are, meaning a huge amount of work would be required on this end as well
So while we may be heartbroken that the journey is coming to a stop in the somewhat near future, hackintoshing will remain a piece of Apple's history. Enjoy it now while we still can, and we here at Dortania will continue supporting the community with our guides till the very end!
Getting ready for macOS 11, Big Sur
This will be your short run down if you skipped the above:
Lilu's userspace patcher is broken
Due to this many kexts will break:
DiskArbitrationFixup
MacProMemoryNotificationDisabler
SidecarEnabler
SystemProfilerMemoryFixup
NoTouchID
WhateverGreen's DRM and -cdfon patches
Many Ivy Bridge and Haswell SMBIOS were dropped
See above for what SMBIOS to choose
Ivy Bridge iGPUs are to be dropped
Currently in 11.0.1, these drivers are still present
For the last 2, see here on how to update: Updating OpenCore, Kexts and macOS
In regards to downloading Big Sur, currently gibMacOS in macOS or Apple's own software updater are the most reliable methods for grabbing the installer. Windows and Linux support is still unknown, so please stand by as we continue to look into this situation; macrecovery.py may be more reliable if you require the recovery package.
And as with every year, the first few weeks to months of a new OS release are painful in the community. We highly advise first-time installers to stay away from Big Sur for now. The reason is that we cannot determine whether issues are Apple-related or specific to your machine, so it's best to install and debug a machine on a known working OS before testing out the new and shiny. For more in-depth troubleshooting with Big Sur, see here: OpenCore and macOS 11: Big Sur
Back around 2008, I built a machine with a 3ware RAID controller, and set up 15 1TB drives in RAID 6. At some point in maybe 2010, I had 3 (or maybe only 2) drives fail due to (most likely) overheating. I was unable to rebuild the array at the time, even with swapping out the failed drive/s. I don't remember the details.
More than a decade later, I still have all 15 drives, in a box, labeled with their order, and the original 3ware controller, and a desiccant pack. I have no idea if the drives still work, but I am finally ready to try to recover the data from them, assuming they still work.
After a bit of duckduckgo-ing, it appears that I really only have 2 options - use recovery software or use a recovery service where I ship out my drives. The data on these drives, while nice to have, is not worth me sending them to a 3rd party. I am, however, willing to spend a little money on the recovery software if I need to. Based on my searching, it appears that there are 3 viable options:
* https://www.diskinternals.com/raid-recovery/
* https://www.stellarinfo.com/article/raid6-data-recovery.php
* http://www.freeraidrecovery.com/
The Diskinternals solution looks like it may be the easiest, but I'm not sure what to expect when I actually try to use it. The Stellar one looks good as well - it has instructions with screenshots and I was able to find a video of someone actually using it. But it needs some technical parameters that I have no idea how to retrieve - maybe I could hook up the old controller and read them by accessing the controller from the bios? I will try that once I'm ready to get my hands dirty. The ReclaiMe one appears to be easy and free, claiming that it will automatically determine the parameters that Stellar expects you to supply. Seems too good to be true, especially as a free product. Their site and their claims make me not trust them...
So to get started on this project, the very first thing I want to do is take some kind of image of each of the 15 drives. Do any of you have recommendations for the best way to do this? The first step in Diskinternals instructions (which are on this separate page for some reason - https://www.diskinternals.com/raid-recovery/raid-6-data-recovery/) lists creating a "binary image" of the disk/s. Once I do this, then do I need to mount it somehow? Do I need some separate program to do that in Windows? I know that I can (and will) look this up, but taking an image of known corrupted drives for the purposes of RAID data recovery with specialized recovery software seems to be a pretty special case, and I want to make sure that the image I take is what will be needed to attempt the recovery. I don't know how many times I'll be able to read from these old drives.
I did a little searching before posting this about disk imaging/cloning - it seems like I need an image, not a clone. Clonezilla looks like the best option (and I've used it before). I've heard good things about Acronis, but their new pricing model turns me off. Most of the alternatives to Clonezilla (Acronis, Paragon, Macrium) don't have technical-enough language to earn my trust. I also took a look at isobuster, because that's a program I already have, but it looks like its ability to take raw images does not include HDDs.
A quick search of datahoarder using the search term "raid 6" didn't bring up any posts that had addressed this scenario - most were about swapping/rebuilding. Any help, guidance, insight, etc. is appreciated. Thanks!
We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 # This GPG-signed message exists to confirm the SHA256 sums of Monero binaries. # # Please verify the signature against the key for binaryFate in the # source code repository (/utils/gpg_keys). # # ## CLI 75b198869a3a117b13b9a77b700afe5cee54fd86244e56cb59151d545adbbdfd monero-android-armv7-v0.16.0.3.tar.bz2 b48918a167b0961cdca524fad5117247239d7e21a047dac4fc863253510ccea1 monero-android-armv8-v0.16.0.3.tar.bz2 727a1b23fbf517bf2f1878f582b3f5ae5c35681fcd37bb2560f2e8ea204196f3 monero-freebsd-x64-v0.16.0.3.tar.bz2 6df98716bb251257c3aab3cf1ab2a0e5b958ecf25dcf2e058498783a20a84988 monero-linux-armv7-v0.16.0.3.tar.bz2 6849446764e2a8528d172246c6b385495ac60fffc8d73b44b05b796d5724a926 monero-linux-armv8-v0.16.0.3.tar.bz2 cb67ad0bec9a342b0f0be3f1fdb4a2c8d57a914be25fc62ad432494779448cc3 monero-linux-x64-v0.16.0.3.tar.bz2 49aa85bb59336db2de357800bc796e9b7d94224d9c3ebbcd205a8eb2f49c3f79 monero-linux-x86-v0.16.0.3.tar.bz2 16a5b7d8dcdaff7d760c14e8563dd9220b2e0499c6d0d88b3e6493601f24660d monero-mac-x64-v0.16.0.3.tar.bz2 5d52712827d29440d53d521852c6af179872c5719d05fa8551503d124dec1f48 monero-win-x64-v0.16.0.3.zip ff094c5191b0253a557be5d6683fd99e1146bf4bcb99dc8824bd9a64f9293104 monero-win-x86-v0.16.0.3.zip # ## GUI 50fe1d2dae31deb1ee542a5c2165fc6d6c04b9a13bcafde8a75f23f23671d484 monero-gui-install-win-x64-v0.16.0.3.exe 20c03ddb1c82e1bcb73339ef22f409e5850a54042005c6e97e42400f56ab2505 monero-gui-linux-x64-v0.16.0.3.tar.bz2 574a84148ee6af7119fda6b9e2859e8e9028fe8a8eec4dfdd196aeade47e9c90 monero-gui-mac-x64-v0.16.0.3.dmg 371cb4de2c9ccb5ed99b2622068b6aeea5bdfc7b9805340ea7eb92e7c17f2478 monero-gui-win-x64-v0.16.0.3.zip # # # ~binaryFate -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl81bL8ACgkQ8K9NRioL 35J+UA//bgY6Mhikh8Cji8i2bmGXEmGvvWMAHJiAtAG2lgW3BT9BHAFMfEpUP5rk svFNsUY/Uurtzxwc/myTPWLzvXVMHzaWJ/EMKV9/C3xrDzQxRnl/+HRS38aT/D+N gaDjchCfk05NHRIOWkO3+2Erpn3gYZ/VVacMo3KnXnQuMXvAkmT5vB7/3BoosOU+ B1Jg5vPZFCXyZmPiMQ/852Gxl5FWi0+zDptW0jrywaS471L8/ZnIzwfdLKgMO49p Fek1WUUy9emnnv66oITYOclOKoC8IjeL4E1UHSdTnmysYK0If0thq5w7wIkElDaV avtDlwqp+vtiwm2svXZ08rqakmvPw+uqlYKDSlH5lY9g0STl8v4F3/aIvvKs0bLr My2F6q9QeUnCZWgtkUKsBy3WhqJsJ7hhyYd+y+sBFIQH3UVNv5k8XqMIXKsrVgmn lRSolLmb1pivCEohIRXl4SgY9yzRnJT1OYHwgsNmEC5T9f019QjVPsDlGNwjqgqB S+Theb+pQzjOhqBziBkRUJqJbQTezHoMIq0xTn9j4VsvRObYNtkuuBQJv1wPRW72 SPJ53BLS3WkeKycbJw3TO9r4BQDPoKetYTE6JctRaG3pSG9VC4pcs2vrXRWmLhVX QUb0V9Kwl9unD5lnN17dXbaU3x9Dc2pF62ZAExgNYfuCV/pTJmc= =bbBm -----END PGP SIGNATURE-----
Upgrading (GUI)
Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear with the new binary. In case you want to update manually, you ought to perform the following steps:
Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows: [1] On the second page of the wizard (first page is language selection) choose Open a wallet from file [2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users//Monero/ (Mac OS X), or home//Monero/ (Linux). Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.
Upgrading (CLI)
You ought to perform the following steps:
Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
Extract the new binaries to a new directory of your liking.
Copy over the wallet files from the old directory (i.e. the v0.15.x.x or v0.16.0.x directory).
Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.16.0.3, it will simply pick up where it left off.
Release notes (GUI)
macOS app is now notarized by Apple
CMake improvements
Add support for IPv6 remote nodes
Add command history to Logs page
Add "Donate to Monero" button
Indicate probability of finding a block on Mining page
In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to manually set a remote node, you ought to use Advanced mode. A guide can be found here: https://www.getmonero.org/resources/user-guides/remote_node_gui.html
We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 # This GPG-signed message exists to confirm the SHA256 sums of Monero binaries. # # Please verify the signature against the key for binaryFate in the # source code repository (/utils/gpg_keys). # # ## CLI 8e3ce10997ab50eec2ec3959846d61b1eb3cb61b583c9f0f9f5cc06f63aaed14 monero-android-armv7-v0.16.0.1.tar.bz2 d9e885b3b896219580195fa4c9a462eeaf7e9f7a6c8fdfae209815682ab9ed8a monero-android-armv8-v0.16.0.1.tar.bz2 4f4a2c761b3255027697cd57455f5e8393d036f225f64f0e2eff73b82b393b50 monero-freebsd-x64-v0.16.0.1.tar.bz2 962f30701ef63a133a62ada24066a49a2211cd171111828e11f7028217a492ad monero-linux-armv7-v0.16.0.1.tar.bz2 83c21fe8bb5943c4a4c77af90980a9c3956eea96426b4dea89fe85792cc1f032 monero-linux-armv8-v0.16.0.1.tar.bz2 4615b9326b9f57565193f5bfe092c05f7609afdc37c76def81ee7d324cb07f35 monero-linux-x64-v0.16.0.1.tar.bz2 3e4524694a56404887f8d7fedc49d5e148cbf15498d3ee18e5df6338a86a4f68 monero-linux-x86-v0.16.0.1.tar.bz2 d226c704042ff4892a7a96bb508b80590a40173683101db6ad3a3a9e20604334 monero-mac-x64-v0.16.0.1.tar.bz2 851b57ec0783d191f0942232e431aedfbc2071125b1bd26af9356c7b357ab431 monero-win-x64-v0.16.0.1.zip e944d15b98fcf01e54badb9e2d22bae4cd8a28eda72c3504a8156ee30aac6b0f monero-win-x86-v0.16.0.1.zip # ## GUI d35c05856e669f1172207cbe742d90e6df56e477249b54b2691bfd5c5a1ca047 monero-gui-install-win-x64-v0.16.0.2.exe 9ff8c91268f8eb027bd26dcf53fda5e16cb482815a6d5b87921d96631a79f33f monero-gui-linux-x64-v0.16.0.2.tar.bz2 142a1e8e67d80ce2386057e69475aa97c58ced30f0ece3f4b9f5ea5b62e48419 monero-gui-mac-x64-v0.16.0.2.tar.bz2 6e0efb25d1f5c45a4527c66ad6f6c623c08590d7c21de5a611d8c2ff0e3fbb55 monero-gui-win-x64-v0.16.0.2.zip # # # ~binaryFate -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl8JaBMACgkQ8K9NRioL 35IKbhAAnmfm/daG2K+llRBYmNkQczmVbivbu9JLDNnbYvGuTVH94PSFC/6K7nnE 8EkiLeIVtBBlyr4rK288xSJQt+BMVM93LtzHfA9bZUbZkjj2le+KN8BHcmgEImA8 Qm2OPgr7yrxvb3aD5nQUDoaeQSmnkLCpN2PLbNGymOH0+IVl1ZYjY7pUSsJZQGvC ErLxZSN5TWvX42LcpyBD3V7//GBOQ/gGpfB9fB0Q5LgXOCLlN2OuQJcYY5KV3H+X BPp9IKKJ0OUGGm0j7mi8OvHxTO4cbHjU8NdbtXy8OnPkXh24MEwACaG1HhiNc2xl LhzMSoMOnVbRkLLtIyfDC3+PqO/wSxVamphKplEncBXN28AakyFFYOWPlTudacyi SvudHJkRKdF0LVIjXOzxBoRBGUoJyyMssr1Xh67JA+E0fzY3Xm9zPPp7+Hp0Pe4H ZwT7WJAKoA6GqNpw7P6qg8vAImQQqoyMg51P9Gd+OGEo4DiA+Sn5r2YQcKY5PWix NlBTKq5JlVfRjE1v/8lUzbe+Hq10mbuxIqZaJ4HnWecifYDd0zmfQP1jt7xsTCK3 nxHb9Tl1jVdIuu2eCqGTG+8O9ofjVDz3+diz6SnpaSUjuws218QCZGPyYxe91Tz8 dCrf41FMHYhO+Lh/KHFt4yf4LKc0c048BoVUg6O0OhNIDTsvd/k= =akVA -----END PGP SIGNATURE-----
Upgrading
Note that, once the DNS records are upgraded, you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear with the new binary. In case you want to update manually, you ought to perform the following steps:
Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows: [1] On the second page of the wizard (first page is language selection) choose Open a wallet from file [2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users//Monero/ (Mac OS X), or home//Monero/ (Linux). Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.
Release notes
Point release:
Fix bug that inhibited Ledger Monero users from properly sending transactions containing multiple inputs.
CMake improvements
Minor security relevant fixes
Various bug fixes
Major release:
Simple mode: node selection algorithm improved
UX: display estimated transaction fee
UX: add update dialog with download and verify functionality
UX: implement autosave feature
UI: redesign advanced options on transfer page
UI: improve daemon sync progress bar
UI: new language sidebar
UI: new processing splash design
UI: redesign settings page
Trezor: support new passphrase entry mechanism
Wizard: add support for seed offset
Dandelion++
Major Bulletproofs verification performance optimizations
In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to manually set a remote node, you ought to use Advanced mode. A guide can be found here: https://www.getmonero.org/resources/user-guides/remote_node_gui.html
Red Hat OpenShift Container Platform Instruction Manual for Windows PowerShell
Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual that is made for Windows users. If you would like to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on those operating systems. Be warned, however, that there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified within the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or macOS, we will focus on how to do this within Windows. If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container Platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces; these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration. This allows for faster container provisioning, deployment and management. It does this by streamlining and automating the management process.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are run within the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don't have this basic knowledge or have trouble with the basic command line interface commands in PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
● https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
● Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
● macOS: https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
● Linux: https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson, https://www.guru99.com/linux-commands-cheat-sheet.html, http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge on PaaS technologies like Docker and Kubernetes.
Minimum system requirements
Hardware requirements
CodeReady Containers requires the following system resources:
● 4 virtual CPUs
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows: On Microsoft Windows, CodeReady Containers requires Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS: On macOS, CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux: On Linux, CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases. When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal. Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set-up of the host machine.
Required additional software packages for Linux
CodeReady Containers on Linux requires the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution: Table 1.1 Package installation commands by distribution
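Table 1.1 itself is not reproduced in this copy of the manual; as an indication, the installation commands look roughly like the following (package names follow the upstream CodeReady Containers documentation and may differ slightly between releases):
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux / CentOS: sudo yum install NetworkManager
Debian / Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager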
Installing CodeReady Containers
To install CodeReady Containers, a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on https://www.openshift.com/, where you need to press login and after that select the option "Create one now".
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from https://cloud.redhat.com/openshift/install/crc/installer-provisioned. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be run in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use it to go to the location in your $PATH where you extracted the CodeReady archive.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup
Setting up CodeReady Containers
Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes, you need to delete the virtual machine with the $crc delete command, create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss, we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start
Note: it is possible that you will get a nameserver error later on; if this is the case, please start the machine with crc start -n 1.1.1.1
Configuration
It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.
Configuring the CodeReady Containers
To start the configuration of the CodeReady Containers, use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand before it is able to configure anything. The available subcommands are:
● get, this command allows you to see the value of a configurable property
● set, this command sets the value of a configurable property
● unset, this command removes a previously set value of a configurable property
● view, this command shows the full configuration in read-only mode
These commands operate on named configurable properties. To list all the available properties, you can run the command $crc config --help. Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or issue a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get <property>
C:\Users\[username]\$PATH>crc config set <property> <value>
C:\Users\[username]\$PATH>crc config unset <property>
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help
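For instance, skipping a single startup check could look like the line below; the property name skip-check-ram is only an illustration, so run $crc config --help to see the property names that actually exist in your crc version.
C:\Users\[username]\$PATH>crc config set skip-check-ram true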
Configuring the Virtual Machine
You can use the cpus and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine. To increase the number of vCPUs available to the virtual machine, use $crc config set cpus <number-of-vcpus>. Keep in mind that the default number of vCPUs is 4 and the number you wish to assign must be equal to or greater than the default value. To increase the memory available to the virtual machine, use $crc config set memory <memory-in-MiB>. Keep in mind that the default amount of memory is 9216 mebibytes and the amount you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number-of-vcpus>
C:\Users\[username]\$PATH>crc config set memory <memory-in-MiB>
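As a concrete sketch, assigning 6 vCPUs and 12288 MiB of memory would look roughly like the lines below; the values are examples only and must be equal to or greater than the defaults mentioned above, and depending on the crc version the property may need to be spelled cpus in lowercase.
C:\Users\[username]\$PATH>crc config set cpus 6
C:\Users\[username]\$PATH>crc config set memory 12288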
Configuring the DNS
Windows / General DNS setup
There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers:
● crc.testing, this is the domain for the core OpenShift services.
● apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks to verify the configuration will be executed.
macOS DNS setup
macOS expects the following DNS configuration for the CodeReady Containers:
● CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the api.crc.testing entry to function properly; CodeReady Containers adds an entry to /etc/hosts pointing it at the VM IP address.
Linux DNS setup
CodeReady Containers expects a slightly different DNS configuration on Linux. CodeReady Containers expects NetworkManager to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf. To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
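Put together, a sketch of the resulting files is shown below. The separate 00-use-dnsmasq.conf file with dns=dnsmasq is an assumption about how dnsmasq is enabled for NetworkManager on your distribution and may not be needed if crc setup already configures this for you.
/etc/NetworkManager/conf.d/00-use-dnsmasq.conf:
[main]
dns=dnsmasq
/etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf:
server=/crc.testing/192.168.130.11
server=/apps-crc.testing/192.168.130.11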
Accessing the OpenShift Cluster
Accessing the OpenShift web console
To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc). First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the passwords for the kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management, and the developer user for creating projects or OpenShift applications and the deployment of these applications.
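In PowerShell the two commands referenced above look like this:
C:\Users\[username]\$PATH>crc console
C:\Users\[username]\$PATH>crc console --credentials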
To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps. Step 1. Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env
Step 2. Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us, in this case:
& crc oc-env | Invoke-Expression
Note: this has to be executed every time you start a new shell; a solution is to move the oc binary to the same path as the crc binary. To test whether this step went correctly, execute the following command; if it returns without errors, oc is set up properly.
C:\Users\[username]\$PATH>.\oc
Step 3. Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that $crc start will provide you with the password that is needed to log in as the developer user.
Step 4. The oc binary can now be used to interact with your OpenShift cluster. If you, for instance, want to verify whether the OpenShift cluster Operators are available, you can execute the following command:
$oc get co
Keep in mind that by default CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co
Demonstration
Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform. We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application. As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled. Lastly, we will show the user how to use user management within the platform.
Creating a project
To be able to create a project within the console, you have to log in on the cluster. If you have not yet done this, it can be done by running the command crc console in the command line and logging in with the login data from before. When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu at the top left.
Now that you are properly logged in, press the dropdown menu shown in the image below, and from there click on Create Project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210
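For reference, roughly the same project could also be created from the command line with the oc binary; the project name and display name below simply mirror the ones chosen in the web console above.
C:\Users\[username]\$PATH>oc new-project codeready --display-name="CodeReady Container"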
There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.
In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling and horizontal scaling. Vertical scaling means adding only more CPU and hard disk and is no longer supported by OpenShift. Horizontal scaling is increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By pressing the up or down arrow, pods of the same application can be added or removed. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application: the more you scale it up, the more resources it will take up.
https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94
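The same kind of scaling can also be sketched from the command line; the deployment name mediawiki below is an assumption made purely for this example.
C:\Users\[username]\$PATH>oc scale deployment/mediawiki --replicas=3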
Network
Since OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes, on which the OpenShift Container Platform is built, ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP address. This makes all containers within the Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We'll show you how this can be done in this demonstration. The Route is not the only thing that can be changed or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
There is a search function within the Container Platform. We'll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation.
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to the navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we've added Routes to the navigation, we can start the creation of the Route by clicking on "Create route".
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we've successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
Storage
OpenShift makes use of Persistent Storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to create persistent volumes without needing any knowledge about the underlying infrastructure. Within this storage there are a few configuration options:
Storage
OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to create persistent volumes without needing any knowledge about the underlying infrastructure. Within this storage there are a few reclaim policy options:
Retain
Recycle
Delete
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data is not automatically deleted with it, and the storage therefore cannot be reassigned to another PV yet. To manually reclaim the PV, follow these steps: Step 1: Delete the PV. This can be done by executing the following command
$oc delete pv <pv_name>
Step 2: Now you need to clean up the data on the associated storage asset. Step 3: Now you can delete the associated storage asset or, if you wish to reuse the same storage asset, you can create a new PV with the storage asset definition. It is also possible to directly change the reclaim policy within OpenShift. To do this, follow these steps: Step 1: Get a list of the PVs in your cluster
$oc get pv
This will give you a list of all the PVs in your cluster and display the following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age. Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
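The commands themselves did not survive formatting; the usual way to change the policy is with oc patch. A minimal sketch, where <pv_name> stands in for the volume chosen in Step 1 and Delete is the target policy (swap in Retain or Recycle as preferred):
$oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'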
In this example the reclaim policy has been changed to Delete. Step 3: After this you can verify the change by executing the $oc get pv command from Step 1 again.
Users
According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer who develops applications or an administrator who manages the cluster. Users can be assigned to groups, which set the permissions applied to all of the group's members. For example, you can give API access to a group, which gives all members of the group API access. There are multiple ways to create a user, depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform; this default denies access for all usernames and passwords. First, we're going to create a new user. The way this is done depends on the identity provider and on the mapping method used as part of the identity provider configuration. For more information on what mapping methods are and how they function, see: https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html With the default mapping method, the steps are as follows:
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity_provider>:<identity_provider_user_name>
The <identity_provider> is the name of the identity provider in the master configuration. For example, the following command creates an identity with the identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity_provider>:<identity_provider_user_name> <username>
For example, the following command maps the identity to the user:
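The example command itself was lost in formatting. Assuming the user created earlier is also called mediawiki_s (an assumption; use whatever name you passed to oc create user), it would look something like:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki_s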
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users. Below is an example of the admin clusterrole command:
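The example itself is missing above. One way to grant the role using the --clusterrole option mentioned, again assuming the user is called mediawiki_s (the binding name and user name are assumptions; oc adm policy add-cluster-role-to-user cluster-admin mediawiki_s is an equivalent shortcut):
$oc create clusterrolebinding mediawiki-admin --clusterrole=cluster-admin --user=mediawiki_s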
If you followed all the steps in this manual, you should now have a functioning MediaWiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you'll be able to set up your own Container Platform environment and host applications of your choosing.
Troubleshooting
Nameserver
There is the possibility that your CodeReady container can't connect to the internet due to a nameserver error. When this is encountered, a fix that worked for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]> crc start -n 1.1.1.1
Hyper-V admin
Should you run into a problem with Hyper-V, it might be because your user is not an admin and therefore can't access the Hyper-V Administrators user group. To add your user to the group:
Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
Click System Tools > Local Users and Groups > Groups. The list of groups opens.
Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
Click Add. The Select Users or Groups window opens.
In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
Click Apply, and then click OK.
Terms and definitions
These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that require definitions.
● Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
● Clusters are collections of multiple nodes that communicate with each other to perform a set of operations.
● Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
● CodeReady Containers is a minimal, preconfigured cluster that is used for development and testing purposes.
● CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
./play.it is a free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.). A more complete description of ./play.it has already been posted in linux_gaming a couple months ago: ./play.it, an easy way to install commercial games on GNU/Linux It's already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12 and focus on the different points of ./play.it that kept us busy during all this time, and of which coding was only a small part.
What’s new with 2.12?
Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;) Compared to the usual updates, 2.12 is a major one, especially since for two years, we slowed down the addition of new features. Some patches took dust since the end of 2018 before finally be integrated in this update! The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:
New options:
--output-dir: Set the output directory for generated packages
--overwrite: Replace packages if they already exist
--icons: Allow including icons only if dependencies are present
Wrapper changes:
Drop $XDG_RUNTIME_DIR from the candidates for temporary directories
Prevent scan of unneeded directories
Drop script identification by MD5 hash
Archive-related changes:
Only extract needed files when using unzip
Allow to use renamed installers
Add support for LHA archives extraction
Engines-related changes:
New engine: ResidualVM
New engine: System-provided Mono runtime
DOSBox: Use $PLAYIT_DOSBOX_BINARY in launchers if defined
Packages-related changes:
Add ability to set variables for package-specific postinst and prerm scripts
Arch Linux: Improve consistency of 32-bit package naming
New helper functions:
version_target_is_older_than: Check if the game script target version is older than a given one
toupper: Convert file names to upper case
New generic dependency keywords:
libgdk_pixbuf-2.0.so.0
libglib-2.0.so.0 / libgobject-2.0.so.0
libmbedtls.so.12
libpng16.so.16
libopenal.so.1 (alias for openal)
libSDL2-2.0.so.0 (alias for sdl2)
libturbojpeg.so.0
libuv.so.1
libvorbisfile.so.3 (alias for vorbis)
libz.so.1
Codebase clean-up and improvements:
Massive rework of all message-related functions
Drop hardcoded paths for icons and .desktop launchers
Use system-specific default installation prefix for generated packages
Forcefully set errexit setting on library initialization
Use dirname/basename instead of built-in shell patterns
Development migration
History
Like many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what's not shared doesn't last long, and so was born the first public Git repository of the project. The easing of collaborative work was only accidentally achieved by this quest for eternity, and wasn't the original motivation for making the repository publicly available. Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:
GitHub, that we all know of, choosing it was more a short-term fallback than a long-term decision ;
some Gogs instance, which was hosted by debian-fr.xyz, a community the main ./play.it author is close to ;
Framagit, a famous instance of the infamous GitLab forge, hosted by Framasoft.
Dedicated forge
As development progressed, ./play.it began to need more resources, dividing its code into several repositories to improve the workflow of the different aspects of the project, adding continuous integration tests and their constraints, etc. A furious desire to understand the nooks and crannies behind a forge platform was the last deciding factor towards hosting a dedicated forge. So it happened: we deployed a forge platform on a dedicated server, hugely benefiting from the tremendous work achieved by the Debian maintainers of the GitLab package. In return, we tried to contribute our findings to improve the packaging of this software. Unexpectedly, this migration happened just a short time before the announcement “Déframasoftisons Internet !” (French article) about the planned end of Framagit. This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and has since been moved to another VPS, rented from Hetzner. The specifications are similar, as is the service, but thanks to this migration our hosting costs have been cut in half. Keep in mind that this is paid for by a single person, so any little donation helps a lot on this front. ;) To the surprise of our system administrator, this last migration took only a couple of hours, with no service interruption reported by our users.
Forge access
This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you to not abuse this, the main restriction being that we do not wish to host projects unrelated to ./play.it. Of course exceptions are made for our active contributors, who are allowed to host some personal projects there. So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.
API
With the collection of supported games growing endlessly, we have started the development of a public API allowing access to lots of information related to ./play.it. This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the handled archives, and the games installable through the project. Relations are, of course, handled between those items, enabling its use for requests like: « What packages are required on my system to install Cæsar Ⅲ? » or « What are the free (as in beer) games handled via DOSBox? ». Originally developed as support for the new, in-development website (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build a complete video game handling application (downloading, installation, starting, etc.) using ./play.it as one of its building bricks. For those curious about the technical side, it's an API based on Lumen making requests to a MariaDB database, all self-hosted on Debian Sid. Not only is the code of the API versioned on our forge, but so are the structure and content of the databases, which will allow those who desire it to easily install a local version.
New website
Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki. Indeed, while the lack of a database and the plain-text file structure of DokuWiki seemed attractive at first, when ./play.it supported only a handful of games (link in French), this became more inconvenient as the library of games supported by ./play.it grew. We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available. If you feel like providing a helping hand on this task, some priority tasks have been identified to allow opening a new website able to replace the current one. For those interested in technical details, this website was developed in PHP using the Laravel framework. The current in-development version is hosted for now on the same Debian Sid server as the API.
GUI
A regular comment we get about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in the terminal remains somewhat intimidating. Our answer until now has been that, while the project itself doesn't aim to provide a graphical interface (KISS principle, "Keep it simple, stupid", still and always), it would be relatively easy to later develop a graphical front-end for it. Well, it happens that this is now a reality. Around the time of our latest publication, one of our contributors, using the API we just talked about, developed a small prototype that is usable enough to warrant a little shout-out. :-) In practice, it is some small Python 3 code (an HCI completely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands issued, but the user shouldn't have to input anything in it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it would be relatively easy, since a script of less than 500 lines of code (written quickly over a weekend) was enough to do the job! Of course, this graphical interface project stays independent from the main project, and is maintained in a specific repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent other similar projects from being born, for example using a different language or graphical toolkit (we, globally, don't have any particular affinity towards Python or GTK). Using this HCI involves three steps: first, a list of available games is displayed, coming directly from our API. You just need to select from the list (optionally using the search bar) the game you want to install. It then switches to a second display, which lists the required files. If several alternatives are available, the user can select the one they want to use. All those files must be in the same directory, and the address bar at the top lets you select which directory to use (clicking the open button at the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move on to the third step, which is just watching ./play.it do its job. :-) Once done, a simple click on the button at the bottom will run the game (although, from this step on, the game is fully integrated into your system as usual, so you no longer need this tool to run it). To download potentially missing files, the HCI will use, depending on what's available on the system, either wget, curl or aria2c (this last one also handling torrents), whose output will be displayed in the terminal of the third phase, just before running the scripts. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input, if the corresponding environment variable is set, which is more user-friendly), otherwise su will be used. Of course, any suggestion for an improvement will be received with pleasure.
New games
Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
7 Billion Humans
Agatha Christie: The ABC Murders
Age of Mythology Demo
Among the Sleep
Anomaly: Warzone Earth
Another Lost Phone: Lauraʼs Story
Assault Android Cactus
Baba Is You
Blade Runner
Bleed
Bleed 2
Blocks that matter (previously supported by ./play.it 1.x)
Butcher Demo
Capsized
Cayne
Cineris Somnia
Commandos 3: Destination Berlin
Diablo
Din’s Curse
Divine Divinity (previously supported by ./play.it 1.x)
Duet (previously supported by ./play.it 1.x)
Earthworm Jim
Edna & Harvey: The Breakout — Anniversary Edition
Element4l
Factorio — Demo
Finding Paradise
Firewatch
FlatOut 2
Forced
Forgotton Anne
Freelancer Demo
Frostpunk
Full Throttle Remastered
Giana Sisters: Twisted Dreams
Gibbous — A Cthulhu Adventure
Gorogoa
Indiana Jones and the Last Crusade
Into the Breach
Kerbal Space Program
LEGO Batman: The Videogame
Lego Harry Potter Years 1-4
Maniac Mansion
Metal Slug 3 (previously supported by ./play.it 1.x)
MIND: Path to thalamus
Minecraft 4K
Minit
Monkey Island 4: Escape from Monkey Island
Multiwinia (previously supported by ./play.it 1.x)
Mushroom 11
Myst: Masterpiece Edition (previously supported by ./play.it 1.x)
Neverwinter Nights: Enhanced Edition
Overgrowth
Perimeter
Populous: Promised Lands (previously supported by ./play.it 1.x)
Populous 2 (previously supported by ./play.it 1.x)
Prison Architect
Q.U.B.E. 2
Quern — Undying Thoughts
Rayman Origins
Retro City Rampage (previously supported by ./play.it 1.x)
RiME
Satellite Reign (previously supported by ./play.it 1.x)
Star Wars: Knights of the Old Republic (previously supported by ./play.it 1.x)
Starship Titanic
SteamWorld Quest: Hand of Gilgamech
Stellaris
Ancient Relics Story Pack
Apocalypse
Arachnoid Portrait Pack
Distant Stars Story Pack
Federations
Horizon Signal
Humanoids Species Pack
Leviathans Story Pack
Lithoids Species Pack
Megacorp
Plantoids Species Pack
Synthetic Dawn Story Pack
Utopia
Strike Suit Zero
Sundered
Sunless Skies
Cyclopean Owl DLC
Symphony
Tangledeep
Tengami
Tetrobot and Co.
The Adventures of Shuggy
The Aquatic Adventure of the Last Human
The Count Lucanor
The First Tree
The Longing
The Pillars of the Earth
The Witcher (previously supported by ./play.it 1.x)
The Witcher 3: Wild Hunt
Tonight We Riot
Toren
Touhou Chireiden ~ Subterranean Animism — Demo
Touhou Hifuu Nightmare Diary ~ Violet Detector
Triple Triad Gold
Vambrace: Cold Soul
VVVVVV (previously supported by ./play.it 1.x)
War for the Overworld (the base game was already supported, new expansions have been added):
Heart of Gold
Seasonal Worker Skins
The Under Games
Warcraft: Orcs & Humans
Warhammer 40,000: Dawn of War — Winter Assault Demo
Warhammer 40,000: Gladius — Relics of War
Warlords Battlecry II (previously supported by ./play.it 1.x)
Wing Commander (previously supported by ./play.it 1.x)
Wing Commander II (previously supported by ./play.it 1.x)
Yooka Laylee
Zak McKracken and the Alien Mindbenders
If your favourite game is not supported by ./play.it yet, you should ask for it in the dedicated tracker on our forge. The only requirement to be a valid request is that there exists a version of the game that is not burdened by DRM.
What’s next?
Our team being inexhaustible, work on the future 2.13 version has already begun… A few major objectives of this next version are :
the complete and definitive relegation to the archive bin of ./play.it 1.14, which is still required for about twenty games ;
Ethereum on ARM. Raspberry Pi 4 images release based on Ubuntu 20.04 64 bit. Turn your Raspberry Pi 4 into an Eth 1.0 or Eth 2.0 node just by flashing the MicroSD card. Memory issues solved and new monitoring dashboards. Installation guide.
TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to turn the Raspberry Pi 4 into a full Ethereum 1.0 node or an Ethereum 2.0 node (beacon chain / validator). Some background first. As you know, we've been running into some memory issues [1] with the Raspberry Pi 4 image, as Raspbian OS is still 32-bit [2] (at least the userland). While we prefer to stick with the official OS, we came to the conclusion that, in order to solve these issues, we need to migrate to a native 64-bit OS. Besides, Eth 2.0 clients don't support 32-bit binaries, so using Raspbian would exclude the Raspberry Pi 4 from running an Eth 2.0 node (and the possibility of staking). So, after several tests, we are now releasing 2 different images based on Ubuntu 20.04 64-bit [3]: Eth 1.0 and Eth 2.0 editions. Basically, both are the same image and include the same features as the Raspbian-based images, but they are set up for running Eth 1.0 or Eth 2.0 software by default. The images take care of all the necessary steps, from setting up the environment and formatting the SSD disk to installing and running the Ethereum software as well as starting the blockchain synchronization.
Main features
Based on Ubuntu 20.04 64bit
Automatic USB disk partitioning and formatting
Adds swap memory (ZRAM kernel module + a swap file) based on Armbian work [7]
Changes the hostname to something like “ethnode-e2a3e6fe” based on MAC hash
Runs software as a systemd service and starts syncing the Blockchain
Includes an APT repository for installing and upgrading Ethereum software
Includes a monitoring dashboard based on Grafana / Prometheus
Software included
Both images include the same packages; the only difference between them is that the Eth 1.0 edition runs Geth by default and the Eth 2.0 edition runs the Prysm beacon chain by default. On the Eth 1.0 side the images ship Geth along with Nethermind and Hyperledger Besu (both discussed below); on the Eth 2.0 side, Prysm and Lighthouse. For the setup you will also need:
30303 Port forwarding (Eth 1.0) and 13000 port forwarding (Eth 2.0) [4]
A case with heatsink and fan (Optional but strongly recommended)
USB keyboard, Monitor and HDMI cable (micro-HDMI) (Optional)
Storage
You will need an SSD to run the Ethereum clients (without an SSD drive there's absolutely no chance of syncing the Ethereum blockchain). There are 2 options:
Use a USB portable SSD disk such as the Samsung T5 Portable SSD.
Use a USB 3.0 external hard drive case with an SSD disk. In our case we used an Inateck 2.5-inch Hard Drive Enclosure FE2011. Make sure to buy a case with a UAS-compliant chip, particularly one of these: JMicron (JMS567 or JMS578) or ASMedia (ASM1153E).
In both cases, avoid getting low-quality SSD disks, as the disk is a key component of your node and it can drastically affect performance (and sync times). Keep in mind that you need to plug the disk into a USB 3.0 port (the blue one).
Note: If you are not comfortable with the command line or if you are running Windows, you can use Etcher (https://etcher.io). Otherwise, open a terminal and check your MicroSD device name by running:
sudo fdisk -l
You should see a device named mmcblk0 or sdd. Unzip and flash the image:
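The flashing commands themselves are not shown above; a minimal sketch, assuming the downloaded file is called ubuntu-20.04-ethraspbian.img.zip and the card shows up as /dev/mmcblk0 (both the file name and the device are assumptions; double-check the device with fdisk first, because dd will overwrite it):
unzip ubuntu-20.04-ethraspbian.img.zip
sudo dd bs=1M if=ubuntu-20.04-ethraspbian.img of=/dev/mmcblk0 conv=fdatasync status=progress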
3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue port). 4.- Power on the device. The Ubuntu OS will boot up in less than one minute, but you will need to wait approximately 10 minutes to allow the script to perform the necessary tasks to turn the device into an Ethereum node and reboot the Raspberry. Depending on the image, you will be running:
Eth 1.0: Geth as the default client syncing the blockchain
Eth 2.0: Prysm as the default client syncing the beacon chain (Topaz testnet)
5.- Log in. You can log in through SSH or using the console (if you have a monitor and keyboard attached).
User: ethereum Password: ethereum
You will be prompted to change the password on first login, so you will need to log in twice. 6.- Open port 30303 for Geth and port 13000 if you are running the Prysm beacon chain. If you don't know how to do this, google "port forwarding" followed by your router model. 7.- Getting console output. You can see what's happening in the background by typing:
sudo tail -f /var/log/syslog
Congratulations. You are now running a full Ethereum node on your Raspberry Pi 4.
Syncing the Blockchain
Now you need to wait for the blockchain to be synced. In the case of Eth 1.0, this will take a few days depending on several factors, but you can expect up to about 5-7 days. If you are running the Eth 2.0 Topaz testnet, you can expect 1-2 days of beacon chain synchronization time. Remember that you will need to set up the validator later in order to start the staking process (see the "How to run the Eth 2.0 validator" section below).
Monitoring dashboards
For this first release, we included 3 monitoring dashboards based on Prometheus [5] / Grafana [6] in order to monitor the node and clients' data (Geth and Besu). You can access them through your web browser:
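The dashboard URLs did not survive formatting. Grafana normally listens on its default port 3000, so, assuming the image keeps that default (check the image documentation if it differs), the address would be along the lines of:
http://<your_raspberry_pi_ip>:3000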
All clients run as a systemd service. This is important because, in case some problem arises, the system will respawn the process automatically. Geth or the Prysm beacon chain runs by default (depending on what you are synchronizing, Eth 1.0 or Eth 2.0), so if you want to switch to another client (from Geth to Nethermind, for instance), you need to stop and disable Geth first, then enable and start the other client:
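As a sketch of that switch, assuming the systemd units simply follow the client names (geth and nethermind here are assumptions; check the actual unit names on your node with systemctl list-units):
sudo systemctl stop geth && sudo systemctl disable geth
sudo systemctl enable nethermind && sudo systemctl start nethermind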
Clients’ config files are located in the /etc/ethereum/ directory. You can edit these files and restart the systemd service in order for the changes to take effect. The only exception is Nethermind which, additionally, has a mainnet config file located here:
/etc/nethermind/configs/mainnet.cfg
Blockchain clients' data is stored in the ethereum home account, with each client using its own hidden directory (note the dot before the directory name). For example, the Eth 2.0 clients use:
/home/ethereum/.eth2
/home/ethereum/.eth2validators
/home/ethereum/.lighthouse
Nethermind and Hyperledger Besu
These two Eth 1.0 clients have become a great alternative to Geth and Parity. The more diversity in the network, the better, so you may want to give them a try and contribute to the network's health. Both need further testing, so feel free to play with them and report back your feedback.
How to run the Eth 2.0 validator (staking)
Once the Topaz testnet beacon chain is synchronized, you can run a validator on the same device. You will need to follow the steps described here: https://prylabs.net/participate The first time, you need to manually create an account by running the "validator" binary and setting a password. Once you have completed this step, you can add the password to /etc/ethereum/prysm-validator.conf and start the validator as a systemd service.
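As a rough sketch of those last steps, assuming the validator systemd unit is called prysm-validator (an assumption; verify the unit name on your device, and note that the exact accounts subcommand and flags vary between Prysm versions):
validator accounts create
sudo systemctl enable prysm-validator && sudo systemctl start prysm-validator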
Feedback appreciated
We put a lot of work into setting up the Raspberry Pi 4 as a full Ethereum node, as we know the massive user base of this device may have a very positive impact on the network. Please take into account that this is the first image based on Ubuntu 20.04, so there may be some bugs. If so, open an issue on GitHub or reach out to us on Twitter (https://twitter.com/EthereumOnARM).
I really enjoyed m4nz's recent post: Getting into DevOps as a beginner is tricky - My 50 cents to help with it, and wanted to do my own version of it in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.
Background
While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and home theater PC before smart TV's and Roku were really a thing simply because I thought it was cool and interesting and enjoyed the novelty of it. Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success with getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin. Without a CS degree, any real experience, and 1 measly certification, I figured I had to level up my skills as quickly as possible and this is where I really started to get into DevOps tools and methodologies. I now have 5 years experience in the IT world, most of it doing DevOps/SRE work.
Certifications
People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world unless you know someone or have something else to prove your skills, like a github profile with lots of open source contributions, or a non-profit you built a website for or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired in my experience. Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real world experience you can get will be worth more than any number of certs imo (both in terms of knowledge gained and in increasing your prospects of getting hired), but this is a good starting place to help you plan out what certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, some folks will place way too much emphasis on them ... it all depends on the company and the person interviewing you. In my experience I feel that they absolutely helped me advance my career. If you feel you don't need them, that's cool too ... they're a lot of work so skip them if you can of course lol.
LPI Linux Essentials - basic multiple choice test on Linux basics. Fairly easy, especially if you have *nix experience; otherwise I'd recommend taking a course like I did. linuxacademy worked for me, but there are other sites out there that can help. For this one, you can probably get by just searching youtube for the topics covered on the test.
Linux Foundation Certified System Administrator - This one is a hands on test which is great, you do a screen share with a proctor and ssh into their server; then you have a list of objectives to accomplish on the server pretty much however you see fit. Write a big bash script to do it all, do like 100 mv commands manually, write a small program in python lol, whatever you want so long as you accomplish the goals in time.
Amazon Web Services certs - I would go for all 3 associate-level certs if you can: Solutions Architect, SysOps Administrator, Developer. These are quite tedious to study for, as at times they can be more a certification that you know which AWS products to get your client to use than a test of your cloud knowledge. For better or worse, AWS is the top cloud provider at the moment, so showing you have knowledge there opens you up to the most jobs. If you know you want to work with another cloud provider then the Google certs can be swapped out here, for example. I know that with the AWS certs, I get offers all the time for companies that use GCP even though I have no real experience there. Folks with the google certs: is the reverse true for you? (genuinely asking, it would be useful for beginners to know).
Certified Kubernetes Administrator - I don't actually have this cert since at this point in my career I have real Kubernetes experience on my resume, so it's kind of not needed, but if you want to learn Kubernetes and prove it to prospective employers it can help.
Tools and Experimentation
While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them. Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
Linux! - Unless you want to only work with Windows for some reason, Linux is the most important thing you can learn to become a good DevOps professional in my view. Install it on your personal laptop, try a bunch of different distributions, develop an opinion on systemd vs. other init systems ;), get a few cloud servers on DigitalOcean or AWS to mess around with, set up a home server, try different desktop environments and window managers, master a cli text editor, break your install and try to fix it, customize your desktop until it's unrecognizable lol. Just get as much experience with Linux as possible!
git - Aside from general Linux knowledge, git is one of the most important tools for DevOps/SREs to know in my view. A good DevOps team will usually practice "git ops," i.e., making changes to your CI/CD pipeline, infrastructure, or server provisioning will involve making a pull request against the appropriate git repo.
terraform - terraform is the de facto "infrastructure as code" tool in the DevOps world. Personally, I love it despite its pain points. It's a great place to start once you have a good Linux and cloud knowledge foundation, as it will allow you to easily and quickly bring up infrastructure to practice with the other tools on this list (a short sketch of the basic workflow follows this list).
packer - While not hugely popular or widely used, it's such a simple and useful tool that I recommend you check it out. Packer lets you build "immutable server images" with all of the tools and configuration you need baked in, so that your servers come online ready to start working immediately without any further provisioning needed. Combined with terraform, you can bring up Kubernetes clusters with a single command, or any other fancy DevOps tools you want to play with.
ansible - With the advent of Kubernetes and container orchestration, "configuration management" has become somewhat less relevant ... or at least less of a flashy and popular topic. It is still something you should be familiar with and it absolutely is in wide use at many companies. Personally, I love the combination of ansible + packer + terraform and find it very useful. Chef and Puppet are nice too, but Ansible is the most popular last I checked so unless you have a preference (or already know Ruby) then I'd go with that.
jenkins - despite its many, many flaws and pain points lol, Jenkins is still incredibly useful and widely used as a CI/CD solution and it's fairly easy to get started with. EDIT: Upon further consideration, Jenkins may not be the best choice for beginners to learn. At this point, you're probably better off with something like GitLab: it's a more powerful and useful tool, you'll learn YAML for its config, and it's less of a pain to use. If you know Jenkins that's great and it will help you get a job probably, but then you might implement Jenkins since it's what you know ... but if you have the chance, choose another tool.
postgres - Knowledge of SQL databases is very useful, both from a DBA standpoint and the operations side of things. You might be helping developers develop a new service and helping with setting up schema (or doing so yourself for an internal tool), or you might be spinning up an instance for devs to access, or even pinpointing that a SQL query is the bottleneck in an app's performance. I put Postgres here because that's what I personally use and have seen a lot in the industry, but experience with any SQL database will be useful.
nginx - nginx is commonly used as an HTTP server for simple services or as an ingress option for kubernetes. Learn the basic config options, how to do TLS, etc.
docker - Ah, the buzzword of yesteryear. Docker and containerization is still incredibly dominant as a paradigm in the DevOps world right now and it is paramount that you learn it and master it. Be comfortable writing Dockerfiles, troubleshooting docker networking, the fundamentals of how linux containers work ... and definitely get familiar with Alpine Linux as it will most likely be the base image for most of your company's docker images.
kubernetes - At many companies, DevOps Engineer/Site Reliability Engineer effectively translates to "Kubernetes Babysitter," especially if you're new on the job. Container orchestration, while no longer truly "cutting edge," is still fairly new and there is high demand for people with knowledge and experience with it. Work through Kubernetes The Hard Way to bring up a cluster manually. Learn and know the various "primitives" like pods and replicasets. Learn about ingress and how to expose services.
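As a taste of the basic Terraform workflow mentioned in the terraform item above, a minimal sketch; it assumes the current directory already contains .tf files describing some infrastructure:
terraform init                   # download providers and set up the working directory
terraform plan -out=plan.tfplan  # preview the changes that would be made
terraform apply plan.tfplan      # apply exactly the plan you just reviewed
terraform destroy                # tear it all down again when you're done practicing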
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.
Programming Languages
Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.
Bash - It's right there in your terminal and, for better or worse, a scarily large amount of the world's IT infrastructure depends on ill-conceived and poorly commented bash scripts. It's bash scripts all the way down. I joke, but bash is an incredibly powerful tool and a great place to start learning programming basics like control flow and variables (see the tiny sketch after this list).
Python - It has a beautiful syntax, it's easy to learn, and the python shell makes it quick to learn the basics. Many companies have large repos of python scripts used by operations for automating all sorts of things. Also, many older DevOps tools (like ansible) are written in python.
Go - Go makes for a great first "systems language" in that it's quite powerful and gives you access to some low level functionality, but the syntax is simple, explicit and easy to understand. It's also fast, compiles to static binaries, has a strong type system and it's easier to learn than C or C++ or Rust. Also, most modern DevOps tools are written in Go. If the documentation isn't answering your question and the logs aren't clear enough, nothing beats being able to go to the source code of a tool for troubleshooting.
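As a tiny illustration of the control flow and variables mentioned in the Bash item, a made-up health-check loop (the host names are invented, nothing more):
#!/usr/bin/env bash
# Ping each host once and report which ones are unreachable.
hosts="web01 web02 db01"
for host in $hosts; do
  if ping -c 1 -W 1 "$host" > /dev/null 2>&1; then
    echo "$host is up"
  else
    echo "$host is DOWN"
  fi
done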
Expanding your knowledge
As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important, nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, networking (it's always DNS ;)), the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving? The solutions to many production problems have already been automated away for the most part: kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level. The biggest innovations in the IT world are generally ones of abstraction: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what is happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are, and it allows you to place new tools and information into the big picture so you'll know why they'd be useful or whether or not they'd work for your company and team before you've even looked in-depth at them. Anyway, I hope that helps. I'll be happy to answer any beginner/getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please just add it here in the comments or in your own post! A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
Our vision to bring the world together through play has never been more relevant than it is now. As our founder and CEO, David Baszucki (a.k.a. Builderman), mentioned in his keynote, more and more people are using Roblox to stay connected with their friends and loved ones. He hinted at a future where, with our automatic machine translation technology, Roblox will one day act as a universal translator, enabling people from different cultures and backgrounds to connect and learn from each other. During his keynote, Builderman also elaborated upon our vision to build the Metaverse; the future of avatar creation on the platform (infinitely customizable avatars that allow any body, any clothing, and any animation to come together seamlessly); more personalized game discovery; and simulating large social gatherings (like concerts, graduations, conferences, etc.) with tens of thousands of participants all in one server. We’re still very early on in this journey, but if these past five months have shown us anything, it’s clear that there is a growing need for human co-experience platforms like Roblox that allow people to play, create, learn, work, and share experiences together in a safe, civil 3D immersive space. Up next, our VP of Developer Relations, Matt Curtis (a.k.a. m4rrh3w), shared an update on all the things we’re doing to continue empowering developers to create innovative and exciting content through collaboration, support, and expertise. He also highlighted some of the impressive milestones our creator community has achieved since last year’s RDC. Here are a few key takeaways:
Adopt Me! now has over 10 billion plays and surpassed 1.6 million concurrent users.
Piggy, launched in January 2020, has close to 5 billion visits in just over six months.
Developers are on track to earn over $250 million in 2020.
In June 2020, developers earned nearly $2 million from Premium Payouts, which rewards them based on the amount of engagement time Premium subscribers spend in their games.
There are now 345,000 developers on the platform who are monetizing their games.
And lastly, our VP of Engineering, Technology, Adam Miller (a.k.a. rbadam), unveiled a myriad of cool and upcoming features developers will someday be able to sink their teeth into. We saw a glimpse of procedural skies, skinned meshes, more high-quality materials, new terrain types, more fonts in Studio, a new asset type for in-game videos, haptic feedback on mobile, real-time CSG operations, and many more awesome tools that will unlock the potential for even bigger, more immersive experiences on Roblox.
Vibin’
Despite the virtual setting, RDC just wouldn’t have been the same without any fun party activities and networking opportunities. So, we invited special guests DJ Hyper Potions and cyber mentalist Colin Cloud for some truly awesome, truly mind-bending entertainment. Yoga instructor Erin Gilmore also swung by to inspire attendees to get out of their chair and get their body moving. And of course, we even had virtual rooms dedicated to karaoke and head-to-head social games, like trivia and Pictionary. Over on the networking side, Team Adopt Me, Red Manta, StyLiS Studios, and Summit Studios hosted a virtual booth for attendees to ask questions, submit resumes, and more. We also had a networking session where three participants would be randomly grouped together to get to know each other.
What does Roblox mean to you?
We all know how talented the Roblox community is from your creations. We’ve heard plenty of stories over the years about how Roblox has touched your lives, how you’ve made friendships, learned new skills, or simply found a place where you can be yourself. We wanted to hear more. So, we asked attendees: What does Roblox mean to you? How has Roblox connected you? How has Roblox changed your life? Then, over the course of RDC, we incorporated your responses into this awesome mural. 📷 Created by Alece Birnbach at Graphic Recording Studio
Knowledge is power
This year’s breakout sessions included presentations from Roblox developers and staff members on the latest game development strategies, a deep dive into the Roblox engine, learning how to animate with Blender, tools for working together in teams, building performant game worlds, and the new Creator Dashboard. Dr. Michael Rich, Associate Professor at Harvard Medical School and Physician at Boston Children’s Hospital, also led attendees through a discussion on mental health and how to best take care of you and your friends’ emotional well-being, especially now during these challenging times. 📷 Making the Dream Work with Teamwork (presented by Roblox developer Myzta) In addition to our traditional Q&A panel with top product and engineering leaders at Roblox, we also held a special session with Builderman himself to answer the community’s biggest questions. 📷 Roblox Product and Engineering Q&A Panel
2020 Game Jam
The Game Jam is always one of our favorite events of RDC. It’s a chance for folks to come together, flex their development skills, and come up with wildly inventive game ideas that really push the boundaries of what’s possible on Roblox. We had over 60 submissions this year—a new RDC record. Once again, teams of up to six people from around the world had less than 24 hours to conceptualize, design, and publish a game based on the theme “2020 Vision,” all while working remotely no less! To achieve such a feat is nothing short of awe-inspiring, but as always, our dev community was more than up for the challenge. I’ve got to say, these were some of the finest creations we’ve seen.
WINNERS
Best in Show: Shapescape
Created By: GhettoMilkMan, dayzeedog, maplestick, theloudscream, Brick_man, ilyannna
You awaken in a strange laboratory, seemingly with no way out. Using a pair of special glasses, players must solve a series of anamorphic puzzles and optical illusions to make their escape.
Excellence in Visual Art: agn●sia
Created By: boatbomber, thisfall, Elttob
An obby experience unlike any other, this game is all about seeing the world through a different lens. Reveal platforms by switching between different colored lenses and make your way to the end.
Most Creative Gameplay: Visions of a perspective reality
Created By: Noble_Draconian and Spathi
Sometimes all it takes is a change in perspective to solve challenges. By switching between 2D and 3D perspectives, players can maneuver around obstacles or find new ways to reach the end of each level.
Outstanding Use of Tech: The Eyes of Providence
Created By: Quenty, Arch_Mage, AlgyLacey, xJennyBeanx, Zomebody, Crykee
This action/strategy game comes with a unique VR twist. While teams fight to construct the superior monument, two VR players can support their minions by collecting resources and manipulating the map.
Best Use of Theme: Sticker Situation
Created By: dragonfrosting and Yozoh
Set in a mysterious art gallery, players must solve puzzles by manipulating the environment using a magic camera and stickers. Snap a photograph, place down a sticker, and see how it changes the world.
OTHER TOP PICKS
Improving Simulation and Performance with an Advanced Physics Solver
August 05, 2020
by chefdeletat, PRODUCT & TECH
📷 In mid-2015, Roblox unveiled a major upgrade to its physics engine: the Projected Gauss-Seidel (PGS) physics solver. For the first year, the new solver was optional and provided improved fidelity and greater performance compared to the previously used spring solver. In 2016, we added support for a diverse set of new physics constraints, incentivizing developers to migrate to the new solver and extending the creative capabilities of the physics engine. Any new places used the PGS solver by default, with the option of reverting back to the classic solver. We ironed out some stability issues associated with high mass differences and complex mechanisms with the introduction of the hybrid LDL-PGS solver in mid-2018. This made the old solver obsolete, and it was completely disabled in 2019, automatically migrating all places to the PGS. In 2019, the performance was further improved using multi-threading that splits the simulation into jobs consisting of connected islands of simulating parts. We still had performance issues related to the LDL that we finally resolved in early 2020. The physics engine is still being improved and optimized for performance, and we plan on adding new features for the foreseeable future.
Implementing the Laws of Physics
📷 The main objective of a physics engine is to simulate the motion of bodies in a virtual environment. In our physics engine, we care about bodies that are rigid, that collide and have constraints with each other. A physics engine is organized into two phases: collision detection and solving. Collision detection finds intersections between geometries associated with the rigid bodies, generating appropriate collision information such as collision points, normals and penetration depths. Then a solver updates the motion of rigid bodies under the influence of the collisions that were detected and constraints that were provided by the user. 📷 The motion is the result of the solver interpreting the laws of physics, such as conservation of energy and momentum. But doing this 100% accurately is prohibitively expensive, and the trick to simulating it in real-time is to approximate to increase performance, as long as the result is physically realistic. As long as the basic laws of motion are maintained within a reasonable tolerance, this tradeoff is completely acceptable for a computer game simulation.
Taking Small Steps
The main idea of the physics engine is to discretize the motion using time-stepping. The equations of motion of constrained and unconstrained rigid bodies are very difficult to integrate directly and accurately. The discretization subdivides the motion into small time increments, where the equations are simplified and linearized making it possible to solve them approximately. This means that during each time step the motion of the relevant parts of rigid bodies that are involved in a constraint is linearly approximated. 📷📷 Although a linearized problem is easier to solve, it produces drift in a simulation containing non-linear behaviors, like rotational motion. Later we’ll see mitigation methods that help reduce the drift and make the simulation more plausible.
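As an illustration of what such a linearized time step can look like, here is a generic semi-implicit Euler step (a textbook scheme, not necessarily the exact one Roblox uses), with h the time step, M the mass matrix, and f the accumulated forces and constraint impulses:
\[ v_{t+h} = v_t + h\,M^{-1} f(x_t, v_t) \qquad x_{t+h} = x_t + h\,v_{t+h} \]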
Solving
📷 Having linearized the equations of motion for a time step, we end up needing to solve a linear system or linear complementarity problem (LCP). These systems can be arbitrarily large and can still be quite expensive to solve exactly. Again the trick is to find an approximate solution using a faster method. A modern method to approximately solve an LCP with good convergence properties is the Projected Gauss-Seidel (PGS). It is an iterative method, meaning that with each iteration the approximate solution is brought closer to the true solution, and its final accuracy depends on the number of iterations. 📷 This animation shows how a PGS solver changes the positions of the bodies at each step of the iteration process, the objective being to find the positions that respect the ball and socket constraints while preserving the center of mass at each step (this is a type of positional solver used by the IK dragger). Although this example has a simple analytical solution, it’s a good demonstration of the idea behind the PGS. At each step, the solver fixes one of the constraints and lets the other be violated. After a few iterations, the bodies are very close to their correct positions. A characteristic of this method is how some rigid bodies seem to vibrate around their final position, especially when coupling interactions with heavier bodies. If we don’t do enough iterations, the yellow part might be left in a visibly invalid state where one of its two constraints is dramatically violated. This is called the high mass ratio problem, and it has been the bane of physics engines as it causes instabilities and explosions. If we do too many iterations, the solver becomes too slow; if we do too few, it becomes unstable. Balancing the two sides has been a painful and long process.
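For reference, the core update of a textbook PGS iteration for an LCP of the form \(A\lambda = b\) with \(\lambda \ge 0\) (a standard formulation rather than Roblox's exact code) solves each impulse \(\lambda_i\) in turn while holding the others fixed, then projects it back onto its valid range:
\[ \lambda_i \leftarrow \max\left(0,\; \frac{b_i - \sum_{j \neq i} A_{ij}\,\lambda_j}{A_{ii}}\right) \]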
Mitigation Strategies
📷A solver has two major sources of inaccuracies: time-stepping and iterative solving (there is also floating point drift but it’s minor compared to the first two). These inaccuracies introduce errors in the simulation causing it to drift from the correct path. Some of this drift is tolerable like slightly different velocities or energy loss, but some are not like instabilities, large energy gains or dislocated constraints. Therefore a lot of the complexity in the solver comes from the implementation of methods to minimize the impact of computational inaccuracies. Our final implementation uses some traditional and some novel mitigation strategies:
Warm starting: starting with the solution from a previous time-step to increase the convergence rate of the iterative solver
Post-stabilization: reprojecting the system back to the constraint manifold to prevent constraint drift
Regularization: adding compliance to the constraints ensuring a solution exists and is unique
Pre-conditioning: using an exact solution to a linear subsystem, improving the stability of complex mechanisms
Strategies 1, 2 and 3 are pretty traditional, but 3 has been improved and perfected by us. Also, although 4 is not unheard of, we haven’t seen any practical implementation of it. We use an original factorization method for large sparse constraint matrices and a new efficient way of combining it with the PGS. The resulting implementation is only slightly slower compared to pure PGS but ensures that the linear system coming from equality constraints is solved exactly. Consequently, the equality constraints suffer only from drift coming from the time discretization. Details on our methods are contained in my GDC 2020 presentation. Currently, we are investigating direct methods applied to inequality constraints and collisions.
Getting More Details
Traditionally there are two mathematical models for articulated mechanisms: there are reduced coordinate methods spearheaded by Featherstone, that parametrize the degrees of freedom at each joint, and there are full coordinate methods that use a Lagrangian formulation. We use the second formulation as it is less restrictive and requires much simpler mathematics and implementation. The Roblox engine uses analytical methods to compute the dynamic response of constraints, as opposed to penalty methods that were used before. Analytical methods were initially introduced in Baraff 1989, where they are used to treat both equality and non-equality constraints in a consistent manner. Baraff observed that the contact model can be formulated using quadratic programming, and he provided a heuristic solution method (which is not the method we use in our solver). Instead of using a force-based formulation, we use an impulse-based formulation in velocity space, originally introduced by Mirtich-Canny 1995 and further improved by Stewart-Trinkle 1996, which unifies the treatment of different contact types and guarantees the existence of a solution for contacts with friction. At each timestep, the constraints and collisions are maintained by applying instantaneous changes in velocities due to constraint impulses. An excellent explanation of why impulse-based simulation is superior is contained in the GDC presentation of Catto 2014. The frictionless contacts are modeled using a linear complementarity problem (LCP) as described in Baraff 1994. Friction is added as a non-linear projection onto the friction cone, interleaved with the iterations of the Projected Gauss-Seidel. The numerical drift that introduces positional errors in the constraints is resolved using a post-stabilization technique using pseudo-velocities introduced by Cline-Pai 2003. It involves solving a second LCP in the position space, which projects the system back to the constraint manifold. The LCPs are solved using a PGS / Impulse Solver popularized by Catto 2005 (also see Catto 2009). This method is iterative and considers each individual constraint in sequence, resolving it independently. Over many iterations, and in ideal conditions, the system converges to a global solution. Additionally, high mass ratio issues in equality constraints are ironed out by preconditioning the PGS using the sparse LDL decomposition of the constraint matrix of equality constraints. Dense submatrices of the constraint matrix are sparsified using a method we call Body Splitting. This is similar to the LDL decomposition used in Baraff 1996, but allows more general mechanical systems, and solves the system in constraint space. For more information, you can see my GDC 2020 presentation. The architecture of our solver follows the idea of Guendelman-Bridson-Fedkiw, where the velocity and position stepping are separated by the constraint resolution. Our time sequencing is:
Advance velocities
Constraint resolution in velocity space and position space
Advance positions
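Sketched in code, the loop looks roughly like the following. Names such as Body, resolveConstraints, and step are illustrative placeholders, not the engine's actual API; the constraint solve is reduced to an empty stub where the PGS pass would run.

#include <vector>

struct Vec3 { double x = 0, y = 0, z = 0; };

struct Body {
    Vec3 position;
    Vec3 velocity;
    Vec3 force;        // accumulated external force for this step
    double invMass = 1.0;
};

// Placeholder for the velocity-space solve (and position-space
// post-stabilization) described above.
void resolveConstraints(std::vector<Body>& /*bodies*/) {}

void step(std::vector<Body>& bodies, double dt)
{
    // 1. Advance velocities using external forces (gravity, user forces, ...).
    for (Body& b : bodies) {
        b.velocity.x += b.force.x * b.invMass * dt;
        b.velocity.y += b.force.y * b.invMass * dt;
        b.velocity.z += b.force.z * b.invMass * dt;
    }

    // 2. Constraint resolution in velocity space, then position space.
    resolveConstraints(bodies);

    // 3. Advance positions with the corrected velocities, so only valid
    //    velocities are ever integrated.
    for (Body& b : bodies) {
        b.position.x += b.velocity.x * dt;
        b.position.y += b.velocity.y * dt;
        b.position.z += b.velocity.z * dt;
    }
}

int main()
{
    std::vector<Body> bodies(1);
    bodies[0].force = {0.0, -9.81, 0.0}; // gravity on a unit-mass body
    for (int i = 0; i < 60; ++i)
        step(bodies, 1.0 / 60.0);
}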
This scheme has the advantage of integrating only valid velocities and of limiting latency in external force application, at the cost of allowing a small amount of perceived constraint violation due to numerical drift. An excellent reference for rigid body simulation is the book Erleben 2005, which was recently made freely available. You can find online lectures about physics-based animation, a blog by Nilson Souto on building a physics engine, a very good GDC presentation by Erin Catto on modern solver methods, and forums like the Bullet Physics Forum and GameDev, which are excellent places to ask questions.
by RandomTruffle

Every non-trivial program has at least some amount of global state, but too much can be a bad thing. In C++ (which constitutes close to 100% of Roblox's engine code) this global state is initialized before main() and destroyed after returning from main(), and this happens in a mostly non-deterministic order. In addition to leading to confusing startup and shutdown semantics that are difficult to reason about (or change), it can also lead to severe instability.

Roblox code also creates a lot of long-running detached threads (threads which are never joined and just run until they decide to stop, which might be never). These two things together have a very serious negative interaction on shutdown, because long-running threads continue accessing the global state that is being destroyed. This can lead to elevated crash rates, test suite flakiness, and just general instability.

The first step to digging yourself out of a mess like this is to understand the extent of the problem, so in this post I'm going to talk about one technique you can use to gain visibility into your global startup flow. I'm also going to discuss how we are using this to improve stability across the entire Roblox game engine platform by decreasing our use of global variables.
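To make the hazard concrete, here is a minimal illustration (not Roblox code) of the shape of the problem: a global object with a non-trivial destructor plus a detached thread that keeps touching it after main() has returned.

#include <chrono>
#include <string>
#include <thread>

struct Config {
    std::string value = "initialized before main()";
};

Config g_config; // global: constructed before main(), destroyed after it returns

void startBackgroundWorker()
{
    std::thread([] {
        for (;;) {
            // Reads g_config forever; nothing ever joins this thread. If it is
            // still running during static destruction, this access touches an
            // object that is being (or has been) destroyed: undefined behavior.
            (void)g_config.value.size();
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }).detach();
}

int main()
{
    startBackgroundWorker();
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    // main() returns here; g_config is destroyed while the worker keeps running.
}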
Introducing -finstrument-functions
Nothing excites me more than learning about a new obscure compiler option that I've never had a use for before, so I was pretty happy when a colleague pointed me to this option in the Clang Command Line Reference. I'd never used it before, but it sounded very cool. The idea is that if we could get the compiler to tell us every time it entered and exited a function, we could filter this information through a symbolizer of some kind and generate a report of functions that a) occur before main(), and b) are the very first function in the call-stack (indicating it's a global). Unfortunately, the documentation basically just tells you that the option exists, with no mention of how to use it or whether it actually does what it sounds like it does. There are also two options that sound similar to each other (-finstrument-functions and -finstrument-functions-after-inlining), and I wasn't entirely sure what the difference was. So I decided to throw up a quick sample on godbolt to see what happened, which you can see here. Note that there are two assembly outputs for the same source listing: one uses the first option and the other uses the second, and we can compare the assembly to understand the differences. We can gather a few takeaways from this sample:
The compiler is injecting calls to __cyg_profile_func_enter and __cyg_profile_func_exit inside of every function, inline or not.
The only difference between the two options occurs at the call-site of an inline function.
With -finstrument-functions, the instrumentation for the inlined function is inserted at the call-site, whereas with -finstrument-functions-after-inlining we only have instrumentation for the outer function. This means that when using -finstrument-functions-after-inlining you won't be able to determine which functions are inlined and where; a toy example after this list makes the difference concrete.
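The following small translation unit (in the spirit of the godbolt sample above) is enough to see the difference for yourself. Compiling it to assembly, e.g. with "clang++ -O2 -finstrument-functions -S demo.cpp", shows __cyg_profile_func_enter / __cyg_profile_func_exit calls emitted for the inlined body inside caller(); with -finstrument-functions-after-inlining those inner calls disappear and only caller() itself is instrumented.

// demo.cpp -- toy instrumentation example, names are illustrative
inline int add(int a, int b) { return a + b; } // expected to be inlined at -O2

int caller(int x)
{
    // With -finstrument-functions: hooks for add() appear here, at the
    // inlined call-site, in addition to caller()'s own hooks.
    // With -finstrument-functions-after-inlining: only caller() is
    // instrumented once add() has been inlined away.
    return add(x, 42);
}

int main() { return caller(1); }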
Of course, this sounds exactly like what the documentation said it did, but sometimes you just need to look under the hood to convince yourself. To put all of this another way: if we want to know about calls to inline functions in this trace, we need to use -finstrument-functions, because otherwise their instrumentation is silently removed by the compiler. Sadly, I was never able to get -finstrument-functions to work on a real example. I would always end up with linker errors deep in the Standard C++ Library which I was unable to figure out. My best guess is that inlining is often a heuristic, and this can somehow lead to subtle ODR (one-definition rule) violations when the optimizer makes different inlining decisions from different translation units. Luckily, global constructors (which are what we care about) cannot possibly be inlined anyway, so this wasn't a problem. I suppose I should also mention that I still got tons of linker errors with -finstrument-functions-after-inlining as well, but I did figure those out. As best as I can tell, this option seems to imply --whole-archive linker semantics. Discussion of --whole-archive is outside the scope of this blog post, but suffice it to say that I fixed it by using linker groups (e.g. -Wl,--start-group and -Wl,--end-group) on the compiler command line. I was a bit surprised that we didn't get these same linker errors without this option and still don't totally understand why. If you happen to know why this option would change linker semantics, please let me know in the comments!
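For reference, the shape of the resulting link line was roughly as follows; the object and library names here are made up, since the real invocation is generated by our build system:

clang++-9 -finstrument-functions-after-inlining main.o -Wl,--start-group libFoo.a libBar.a -Wl,--end-group -o MyApp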
Implementing the Callback Hooks
If you're astute, you may be wondering what in the world __cyg_profile_func_enter and __cyg_profile_func_exit are, and why the program even links successfully in the first place without giving undefined symbol reference errors, since the compiler is apparently trying to call some function we've never defined. Luckily, there are some options that let us see inside the linker's algorithm, so we can find out where it's getting this symbol from to begin with. Specifically, -y should tell us how the linker resolves a given symbol. We'll try it with a dummy program first and a symbol that we've defined ourselves, then we'll try it with __cyg_profile_func_enter.
$ cat instr.cpp
int main() {}
$ clang++-9 -fuse-ld=lld -Wl,-y -Wl,main instr.cpp
/usr/bin/../lib/gcc/x86_64-linux-gnu/crt1.o: reference to main
/tmp/instr-5b6c60.o: definition of main
No surprises here. The C Runtime Library references main(), and our object file defines it. Now let’s see what happens with __cyg_profile_func_enter and -finstrument-functions-after-inlining.
$ clang++-9 -fuse-ld=lld -finstrument-functions-after-inlining -Wl,-y -Wl,__cyg_profile_func_enter instr.cpp
/tmp/instr-8157b3.o: reference to __cyg_profile_func_enter
/lib/x86_64-linux-gnu/libc.so.6: shared definition of __cyg_profile_func_enter
Now we see that libc provides the definition, and our object file references it. Linking works a bit differently on Unix-y platforms than it does on Windows, but basically this means that if we define this function ourselves in our cpp file, the linker will automatically prefer it over the shared library version. A working godbolt link (without runtime output) is here. So now you can kind of see where this is going; however, there are still a couple of problems left to solve.
We don’t want to do this for a full run of the program. We want to stop as soon as we reach main.
We need a way to symbolize this trace.
The first problem is easy to solve. All we need to do is compare the address of the function being called to the address of main, and set a flag indicating we should stop tracing henceforth. (Note that taking the address of main is undefined behavior[1], but for our purposes it gets the job done, and we aren’t shipping this code, so ¯\_(ツ)_/¯). The second problem probably deserves a little more discussion though.
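Sketched in code, the first part looks roughly like the following translation unit, compiled with instrumentation enabled and linked into the program being traced (which supplies main()). This is a rough illustration, not our exact hooks; those live in the godbolt sample referenced in the next section.

#include <cstdio>

int main(); // so we can compare against its address below (technically UB, as noted)

namespace {
// Plain variables with constant initialization, so they are safe to touch
// from global constructors that run before main().
bool g_doneTracing = false;
int  g_depth = 0;
}

// The hooks themselves must not be instrumented, or they would recurse.
extern "C" __attribute__((no_instrument_function))
void __cyg_profile_func_enter(void* this_fn, void* /*call_site*/)
{
    if (g_doneTracing)
        return;
    if (this_fn == reinterpret_cast<void*>(&main)) {
        g_doneTracing = true; // we've reached main(): stop tracing
        return;
    }
    // Indentation by call depth gives the "visual" hierarchy mentioned below.
    std::fprintf(stderr, "%*senter %p\n", g_depth * 2, "", this_fn);
    ++g_depth;
}

extern "C" __attribute__((no_instrument_function))
void __cyg_profile_func_exit(void* /*this_fn*/, void* /*call_site*/)
{
    if (g_doneTracing)
        return;
    if (g_depth > 0)
        --g_depth;
}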
Symbolizing the Traces
In order to symbolize these traces, we need two things. First, we need to store the trace somewhere on persistent storage; we can't expect to symbolize in real time with any kind of reasonable performance. You can write some C code to save the trace to some magic filename, or you can do what I did and just write it to stderr (this way you can pipe stderr to some file when you run it). Second, and perhaps more importantly, for every address we need to write out the full path of the module the address belongs to. Your program loads many shared libraries, and in order to translate an address into a symbol, we have to know which shared library or executable the address actually belongs to. In addition, we have to be careful to write out the symbol's address as it appears in the file on disk, not the address it happens to have at runtime: when your program is running, the operating system could have loaded it anywhere in memory, and if we're going to symbolize after the fact we need an address that is still meaningful once the information about where the module was loaded is gone. The Linux function dladdr() gives us both pieces of information we need. A working godbolt sample with the exact implementation of our instrumentation hooks as they appear in our codebase can be found here.
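Since that godbolt sample isn't reproduced here, the following standalone sketch shows just the dladdr() piece; the helper name recordAddress is made up, and the real hooks do more than this. It assumes a position-independent binary, where subtracting dli_fbase yields the file-relative offset that symbolizers expect (a non-PIE executable would want the raw address instead). Link with -ldl.

#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <dlfcn.h>

#include <cstdint>
#include <cstdio>

__attribute__((no_instrument_function))
static void recordAddress(void* addr)
{
    Dl_info info = {};
    if (dladdr(addr, &info) != 0 && info.dli_fname != nullptr) {
        // Offset of the address relative to the module's load base, i.e. the
        // address we can look up later in the file on disk.
        const std::uintptr_t offset =
            reinterpret_cast<std::uintptr_t>(addr) -
            reinterpret_cast<std::uintptr_t>(info.dli_fbase);
        std::fprintf(stderr, "%s+0x%llx\n", info.dli_fname,
                     static_cast<unsigned long long>(offset));
    } else {
        std::fprintf(stderr, "unknown %p\n", addr);
    }
}

int main()
{
    // Demo: record the address of this helper itself.
    recordAddress(reinterpret_cast<void*>(&recordAddress));
}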
Putting it All Together
Now that we have a file in this format saved on disk, all we need to do is symbolize the addresses. addr2line is one option, but I went with llvm-symbolizer as I find it more robust. I wrote a Python script to parse the file and symbolize each address, then print it in the same “visual” hierarchical format that the original output file is in. There are various options for filtering the resulting symbol list so that you can clean up the output to include only things that are interesting for your case. For example, I filtered out any globals that have boost:: in their name, because I can’t exactly go rewrite boost to not use global variables. The script isn’t as simple as you would think, because simply crawling each line and symbolizing it would be unacceptably slow (when I tried this, it took over 2 hours before I finally killed the process). This is because the same address might appear thousands of times, and there’s no reason to run llvm-symbolizer against the same address multiple times. So there’s a lot of smarts in there to pre-process the address list and eliminate duplicates. I won’t discuss the implementation in more detail because it isn’t super interesting. But I’ll do even better and provide the source! So after all of this, we can run any one of our internal targets to get the call tree, run it through the script, and then get output like this (actual output from a Roblox process, source file information removed):
excluded_symbols = ['.*boost.*']
excluded_modules = ['/usr.*']

/usr/lib/x86_64-linux-gnu/libLLVM-9.so.1: 140 unique addresses
InterestingRobloxProcess: 38928 unique addresses
/usr/lib/x86_64-linux-gnu/libstdc++.so.6: 1 unique addresses
/usr/lib/x86_64-linux-gnu/libc++.so.1: 3 unique addresses

Printing call tree with depth 2 for 29276 global variables.
__cxx_global_var_init.5 (InterestingFile1.cpp:418:22)
  RBX::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:415:0)
__cxx_global_var_init.19 (InterestingFile2.cpp:183:34)
  (anonymous namespace)::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:171:0)
__cxx_global_var_init.274 (InterestingFile3.cpp:2364:33)
  RBX::InterestingRobloxClass3::InterestingRobloxClass3()
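For the curious, the symbolization step inside the script ultimately boils down to batched llvm-symbolizer invocations of roughly this shape (the module path and address here are purely illustrative):

llvm-symbolizer --obj=/usr/lib/x86_64-linux-gnu/libLLVM-9.so.1 0x2f4ac0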
So there you have it: the first half of the battle is over. I can run this script on every platform, compare the results to understand the order in which our globals are actually initialized in practice, and then slowly migrate this code out of global initializers and into main(), where it can be deterministic and explicit.
Future Work
It occurred to me sometime after implementing this that we could make a general-purpose profiling hook that exposed some public symbols (dllexport'ed, if you speak Windows) and allowed a plugin module to hook into it dynamically. This plugin module could filter addresses using whatever arbitrary logic it was interested in. One interesting use case I came up with for this is that it could look up the debug information, check whether the current address maps to the constructor of a function-local static, and write out the address if so. This would effectively allow us to gain a deeper understanding of the order in which our lazy statics are initialized. The possibilities are endless here.
Further Reading
If you're interested in this kind of thing, I've collected a couple of my favorite references on the topic.