
What's new in macOS 11, Big Sur!

It's that time of year again, and we've got a new version of macOS on our hands! This year we've finally jumped off the 10.xx naming scheme and moved on to 11! And with that, a lot has changed under the hood in macOS.
As with previous years, we'll be going over what's changed in macOS and what you should be aware of as a macOS and Hackintosh enthusiast.

Has Nvidia Support finally arrived?

Sadly, every year I have to answer the obligatory question: no, there is no new Nvidia support. Currently, Nvidia's Kepler line is the only natively supported generation.
However, macOS 11 makes some interesting changes to the boot process, specifically moving GPU drivers into stage 2 of booting. This is relevant because of Apple's initial reason for killing off Web Drivers: Secure Boot. Secure Boot cannot work with Nvidia's Web Drivers because of how early Nvidia's drivers have to initialize, and thus Apple refused to sign the binaries. With Big Sur, third-party GPU support could return, however the chances are still super slim, though slightly higher than with 10.14 and 10.15.

What has changed on the surface

A whole new iOS-like UI

Love it or hate it, we've got a new UI more reminiscent of iOS 14, with hints of skeuomorphism (a somewhat subtle callback to previous Mac UIs, which had neat details in the icons).
You can check out Apple's site to get a better idea:

macOS Snapshotting

Snapshotting is a feature initially baked into APFS back in 2017 with the release of macOS 10.13, High Sierra. With Big Sur, macOS's main System volume has become both read-only and snapshotted. What this means is:
However, there are a few things to note with this new enforcement of snapshotting:

What has changed under the hood

Quite a few things actually! Both in good and bad ways unfortunately.

New Kernel Cache system: KernelCollections!

So for the past 15 years, macOS has been using the Prelinked Kernel as a form of kernel and kext caching. And with macOS Big Sur's new read-only, snapshot-based system volume, a new version of caching has been developed: KernelCollections!
How this differs from previous OSes:

Secure Boot Changes

With regards to Secure Boot, all officially supported Macs will now support some form of Secure Boot even if there's no T2 chip present. This is done in 2 stages:
While technically these security features are optional and can be disabled after installation, many features including OS updates will no longer work reliably once disabled. This is due to the heavy reliance on snapshots for OS updates, as mentioned above, so we highly encourage all users to ensure at minimum that SecureBootModel is set to Default or higher.

No more symbols required

This point is the most important part, as this is what we use for kext injection in OpenCore. Currently Apple has left symbols in place, seemingly for debugging purposes; however, this is a bit worrying, as Apple could outright remove symbols in later versions of macOS. For Big Sur's cycle we'll be good on that end, but we'll be keeping an eye on future releases of macOS.

New Kernel Requirements

With this update, the AvoidRuntimeDefrag Booter quirk in OpenCore broke, and because of this the macOS kernel will fall flat when trying to boot. The reason is that cpu_count_enabled_logical_processors requires the MADT (APIC) table, so OpenCore will now ensure this table is made accessible to the kernel. Users will, however, need a build of OpenCore 0.6.0 with commit bb12f5f or newer to resolve this issue.
Additionally, both Kernel Allocation requirements and Secure Boot have also broken with Big Sur due to the new caching system discussed above. Thankfully these have also been resolved in OpenCore 0.6.3.
To check your OpenCore version, run the following in terminal:
nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
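The command prints the version string of the currently booted OpenCore; purely as an illustrative example (the exact value will differ on your system), the output of an up-to-date release build looks something like:
REL-063-2020-11-02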
If you're not up-to-date and running OpenCore 0.6.3+, see here on how to upgrade OpenCore: Updating OpenCore, Kexts and macOS

Broken Kexts in Big Sur

Unfortunately, with the aforementioned KernelCollections, some kexts have broken or have been hindered in some way. The main kexts that currently have issues are anything relying on Lilu's userspace patching functionality:
Thankfully, most important kexts rely on the kernelspace patcher, which is now in fact working again.

MSI Navi installer Bug Resolved

For those receiving boot failures in the installer due to having an MSI Navi GPU installed, macOS Big Sur has finally resolved this issue!

New AMD OS X Kernel Patches

For those running on AMD-based CPUs, you'll also want to update your kernel patches, since they have been rewritten for macOS Big Sur support:

Other notable Hackintosh issues

Several SMBIOS have been dropped

Big Sur dropped a few Ivy Bridge and Haswell based SMBIOS from macOS, so check below to make sure yours wasn't dropped:
If your SMBIOS was supported in Catalina and isn't included above, you're good to go! We also have a more in-depth page here: Choosing the right SMBIOS
For those wanting a simple translation for their Ivy and Haswell Machines:

Dropped hardware

Currently only certain hardware has been officially dropped:

Extra long install process

Due to the new snapshot-based OS, installation now takes some extra time for sealing. If you get stuck at Forcing CS_RUNTIME for entitlement, do not shut down; doing so will corrupt your install and break the sealing process, so please be patient.

X79 and X99 Boot issues

With Big Sur, IOPCIFamily went through a decent rewrite, causing many X79 and X99 boards to fail to boot as well as panic on IOPCIFamily. To resolve this issue, you'll need to disable the unused uncore bridge:
You can also find prebuilts here for those who do not wish to compile the file themselves:

New RTC requirements

With macOS Big Sur, AppleRTC has become much more picky about making sure your OEM correctly mapped the RTC regions in your ACPI tables. This is mainly relevant on Intel's HEDT series boards; I documented how to patch said RTC regions in OpenCorePkg:
For those having boot issues on X99 and X299, this section is super important; you'll likely get stuck at PCI Configuration Begin. You can also find prebuilts here for those who do not wish to compile the file themselves:

SATA Issues

For some reason, Apple removed the AppleIntelPchSeriesAHCI class from AppleAHCIPort.kext. Due to the outright removal of the class, trying to spoof to another ID (generally done by SATA-unsupported.kext) can fail for many and create instability for others.
* A partial fix is to block Big Sur's AppleAHCIPort.kext and inject Catalina's version with any conflicting symbols patched. You can find a sample kext here: Catalina's patched AppleAHCIPort.kext
* This will work in both Catalina and Big Sur, so you can remove SATA-unsupported if you want. However, we recommend setting the MinKernel value to 20.0.0 to avoid any potential issues.

Legacy GPU Patches currently unavailable

Due to major changes in many frameworks around GPUs, those using ASentientBot's legacy GPU patches are currently out of luck. We recommend that users with these older GPUs either stay on Catalina until further developments arise or buy an officially supported GPU.

What’s new in the Hackintosh scene?

Dortania: a new organization has appeared

As many of you have probably noticed, a new organization focusing on documenting the hackintoshing process has appeared. Originally under my alias, Khronokernel, I started to transition my guides over to this new family as a way to concentrate the vast amount of information around Hackintoshes to both ease users and give a single trusted source for information.
We work quite closely with the community and developers to ensure the information is correct, up-to-date and of the best standard. While not perfect in every way, we hope to be the go-to resource for reliable Hackintosh information.
And for the times our information is either outdated, missing context or generally needs improving, we have our bug tracker to allow the community to more easily bring attention to issues and speak directly with the authors:

Dortania's Build Repo

For those who either want to run the latest builds of a kext or need an easy way to test old builds of something, Dortania's Build Repo is for you!
Kexts here are built right after each commit, and the repo currently covers most of Acidanthera's kexts and some 3rd party devs as well. If you'd like to add support for more kexts, feel free to PR: Build Repo source

True legacy macOS Support!

As of OpenCore's latest version, 0.6.2, you can now boot every x86-based build of OS X/macOS! Thanks to a huge achievement on @Goldfish64's part, we now support every major version of kernel cache, both 32- and 64-bit. This means machines like Yonah and newer should work great with OpenCore, and you can even relive the old days of OS X, like OS X 10.4!
Dortania's guides have been updated accordingly to accommodate builds of those eras; we hope you get as much enjoyment going back as we did working on this project!

Intel Wireless: More native than ever!

Another amazing step forward for the Hackintosh community: near-native Intel Wi-Fi support! Thanks to the endless work of the many contributors to the OpenIntelWireless project, we can now use Apple's built-in IO80211 framework to get near-identical support to Broadcom wireless cards, including features like network access in recovery and Control Center support.
For more info on the developments, please see the itlwm project on GitHub: itlwm

Clover's revival? A Frankenstein of a bootloader

As many in the community have seen, a new bootloader popped up back in April of 2019 called OpenCore. This bootloader was made by the same people behind projects such as Lilu, WhateverGreen, AppleALC and many other extremely important utilities for both the Mac and Hackintosh community. OpenCore's design was properly thought out, with security auditing and a proper roadmap laid down; it was clear that this was to be the next stage of hackintoshing for the years we have left with x86.
And now let's bring this back to the old crowd favorite, Clover. Clover has been having a rough time recently, both with the community and stability-wise; with many devs jumping ship to OpenCore and Clover's stability breaking more and more with the C++ rewrites, it was clear Clover was on its last legs. Interestingly enough, the community didn't want Clover to die, similar to how Chameleon lived on through Enoch. And thus we now have the Clover OpenCore integration project (now merged into master with r5123+).
The goal is to combine OpenCore into Clover, allowing the project to live a bit longer, as Clover's current state can no longer boot macOS Big Sur or older versions of OS X such as 10.6. As of writing, this project seems a bit confusing, as there is little reason to actually support Clover. Many of Clover's properties have feature parity in OpenCore, and trying to combine both C++ and C ruins many of the features and benefits either language provides. The main feature OpenCore does not support is macOS-only ACPI injection; however, the reasoning is covered here: Does OpenCore always inject SMBIOS and ACPI data into other OSes?

Death of x86 and the future of Hackintoshing

With macOS Big Sur, a big turning point is about to happen for Apple and their Macs. As we know, Apple will be shifting to in-house designed Apple Silicon Macs (really just ARM), and thus x86 machines will slowly be phased out of their lineup within 2 years.
What does this mean for both x86-based Macs and Hackintoshing in general? Well, we can expect about 5 years of proper OS support for the iMac20,x series, which released earlier this year, with an extra 2 years of security updates. After this, Apple will most likely stop shipping x86 builds of macOS, and hackintoshing as we know it will have passed away.
For those still in denial, hoping something like ARM Hackintoshes will arrive, please consider the following:
So while we may be heartbroken that the journey is coming to a stop in the somewhat near future, hackintoshing will still be a timepiece in Apple's history. So enjoy it now while we still can, and we here at Dortania will continue supporting the community with our guides till the very end!

Getting ready for macOS 11, Big Sur

This will be your short rundown if you skipped the above:
For the last 2, see here on how to update: Updating OpenCore, Kexts and macOS
In regards to downloading Big Sur, currently gibMacOS in macOS or Apple's own software updater are the most reliable methods for grabbing the installer. Windows and Linux support is still unknown, so please stand by as we continue to look into the situation; macrecovery.py may be more reliable if you require the recovery package.
And as with every year, the first few weeks to months of a new OS release are painful for the community. We highly advise first-time installers to stay away from Big Sur. The reason is that we cannot determine whether issues are Apple-related or specific to your machine, so it's best to install and debug a machine on a known working OS before testing out the new and shiny.
For more in-depth troubleshooting with Big Sur, see here: OpenCore and macOS 11: Big Sur
submitted by dracoflar to r/hackintosh

2.9.3 Stable update!

What is up Depthians!
We are back with another monstrous update as this one incorporates five beta test builds, so we have a lot to cover.
If you want to dive straight into the massive changelog/dissertation, click here.
We should probably start with the biggest change to From The Depths in this update, and that is the change to fuel and ammo storage.
Quoting Nick, our lead developer:
The change is quite simple: "remove ammo and fuel as separate resources. Weapons will consume materials directly, fuel engines and CJEs will burn materials directly".
Before I dig into why I think this is the right thing for FtD, I'd like to explain a few details.
Energy, fuel and ammo are still needed for your constructs.
We have changed the "ammo barrels (etc)" and "fuel tanks" so they are just alternative material storage containers, but with the following properties:
--"ammo barrels" now increase the maximum possible rate of usage of materials as "ammo" for reloading guns. They still explode.
--"fuel tanks" increase the maximum possible rate of use of materials as "fuel" for fuel engines and CJEs, with the future stretch goal of fuel tanks being flammable.
--So ammo racking is going to remain a feature of the game- vehicles that need to reload a large amount of materials may need additional ammo barrels
Ammo and oil processors are replaced ship-wide with existing material storage containers of the same size. They'll be made decorative blocks so you can still use them decoratively in future if you want to.
The oil refinery will be repurposed (described later in the patch notes)
There are two main reasons why I think this is the right move. Why it's right for the business and why it's right for the player.
Let's start with why I think it's right for the player:
Ammo and fuel containers are currently purchasable as either "empty or full". This is confusing when considered in the context of the campaign, story missions, custom battles, multiplayer matches...how do empty and full tanks behave in these modes? I'd need an hour to study the code and a small essay to explain it. That's not good game design.
Localised resources, when considering just the moving of material (and energy, if you want), becomes infinitely more manageable. The supply group system and the transit fleet system are not intuitive and for a lot of situations, their usage becomes fiddly and too complicated. We've replaced these systems with a new supply system that is much more intuitive for moving materials and energy around.
The UI is less cluttered now that ammo and fuel bars are not shown. This is not a minor point...it'll reduce the amount of data on screen by about 40% in a lot of the different views. It'll be so much easier to know at a glance if a particular fleet is running low on "materials" or doing fine. Is a transport ready to leave, or does it need to pick up more materials? Will a set of vehicles have enough materials for the next fight...this is so much easier with just one main resource type per vehicle.
When you or an enemy run out of ammo or fuel in a battle it's just frustrating. By combining fuel, ammo and materials for repairing you can guarantee that if someone runs out, the fight is going to be over quickly.
I imagine that deep down the majority of players would rather not have to create, stock and resupply fuel and ammo. I know that personally, the requirement to do this puts me off playing the campaign. By using a single material it still focuses the game on making efficient war machines, maintaining supply lines and growing your economy, but without the extra confusion of mat->ammo and mat-> fuel conversion.
Being able to assess weapons, engines and vehicles in terms of material cost and running cost is elegant.
Most grand strategy games and RTS games don't have localised resources, and many don't have more than 2 resource types to handle. Very few combine localised materials with multiple types.
Why it's right for the business:
The ammo and oil processors were created about 8 years ago. Boring single blocks that don't add much to the game. It's been our intention to add something similar to the oil refinery but for ammo creation. That's a lot of work and adds to the complexity of the logistical part of the game, which we feel is already a burden.
Making the localised resource supply system more user friendly to make it easy/natural/pleasant to move ammo, fuel and material around the map would require a lot of effort and, quite frankly, I'm not sure we'd ever manage it.
The complexity of the UI scares off a lot of our customers. The barriers to getting a gun firing or a boat moving will be lowered if a single material container can theoretically get everything working.
Running out of ammo/fuel in combat is a problem for our players. We want to find a solution to that, but it would take a lot of effort to do so. We also want the strategic AI to always enter a battle with enough ammo and fuel for the fight- that's another massive bunch of work.
The campaign's strategic AI has to work hard to get materials where it wants them. It's a bundle of work and added complexity to get NPC fleets to restock ammo and fuel as well.
We had proposed work to make resource dumps (from dead ships) contain ammo and fuel...again, that's more work, more bugs, more testing.
Certain game modes such as story missions, tournament mode, and multiplayer maps should theoretically allow the player to choose the amount of ammo or fuel stocked into their vehicles before the match begins. That's another bundle of work and added complexity we'd like to avoid.
Currently out of play units on the map can run out of fuel and will still continue to move "for free". It's exploitable and we don't have a solution to that...but if all the different out of play movement calculations are burning material, there will be no avoiding the cost.
The development effort can be much better spent polishing up other features that I actually believe in, rather than flogging the dead horse of logistical complexity in an attempt to make it interesting, approachable and fun for everyone (which I fundamentally don't think it would ever be).
Fundamentally I think that by winding back this feature we tie up a large number of loose ends and it results in a far more finished and enjoyable product.
And what's more, everyone on the development team agrees that we enjoy the game for fighting, looting and creating, not staring blankly at dozens of resource bars trying to figure out who needs to head back for more fuel and how long we need to wait for ammunition to process.
We've also simplified the resource transfer system. "Supply groups" and "Transit Fleets" have been replaced with a simple but comprehensive three-tier system. You can mark a vehicle as a "Creator", a "Cargo" or a "User". Creators fill up Cargos (and Users), Cargos give to Users (up to procurement levels). Users equalise their material with their neighbours, so do Creators, and there are a few handy transfers from Users back to Cargo and Creator to make sure they maintain their procurement levels as well. This system covers 95% of the way people were using the resource system and does it all semi-automatically. This simplification is much more possible now that materials are the only resource, as they invariably just need to flow from the resource zones to the front line, with everyone (Creators and Cargo) keeping what they need and passing the rest on. This new resource system also facilitates the long-range transport of materials from refinery to refinery, which is neat. The system also has an option, for Creator and Cargo types, to set their "supply chain index", so if you want to relay materials from output to output in order to accumulate them at a central location you can set the supply chain index to determine which way along the chain the materials will flow. It's all explained in the game.
After spending a lot of time with this new system from adventure to campaign and designer mode, the gameplay feels a little faster to get going and a little simpler for fleet management. As if you didn’t already know, you can shift+right click (with your supply construct selected) on the target construct / flagship of a fleet to keep supplied, keep holding down shift and right-click where you want to pick the resources up from and once again while not letting go of shift, shift+right click on the target construct/flag ship to finish the loop.
This would be done of course after setting up the settings Creator, Cargo and User.
Creator, as an example, is the harvesting construct; Cargo would be the supply ship; and User would be a single target construct that uses the mats.
This will keep the supply ship target waypoint updated and therefore your supply ship will always head to the target construct no matter where it has moved to after setting up the loop.
You still need ammo and fuel boxes on your constructs, as these govern the transfer rate, i.e. the speed at which your turrets and fuel engines are stocked with the materials needed for them to run. You can run a construct without fuel or ammo boxes; however, once your APS clips are empty you will see a drop in your rate of fire, as the material is not being transferred fast enough. The same goes for fuel engines and CJEs.
Another change that goes hand in hand with resource management is the changes to fuel refineries.
In short:
Refineries on a force with greater than 1 million materials on it will begin refining the material into 'commodities' that are stored centrally. Commodities (AKA centralised materials) can be added by the player to any vehicle in allied territory, at any time.
Resource zones have a new feature too: the ability to deactivate a resource zone on your owned tiles, provided you own enough territory, as you can see from the "Zone Deactivation" option in the UI when double-clicking on the resource zone.
https://preview.redd.it/284w9khtt9t51.jpg?width=1920&format=pjpg&auto=webp&s=9dd61b06b2b6d0431bbb35c44a4d54563b81fbf0
Custom Jet Engines have had some additional parts and new features.
We have the new ducted air intakes which as you can see have different attachment points
https://preview.redd.it/qaqeplmwt9t51.jpg?width=1920&format=pjpg&auto=webp&s=2ac2019d4b0c908019bf0ef0d53ad3a718fc4f4d
These ducted intakes allow you to have your CJE enclosed inside your construct enabling you to pass ducting through to access airflow outside.
https://preview.redd.it/pge1x43yt9t51.jpg?width=1920&format=pjpg&auto=webp&s=f2ee0cf35276f45feeb7320b29d844fa54776cdf
https://preview.redd.it/scych37zt9t51.jpg?width=1920&format=pjpg&auto=webp&s=1bf7559bc2379b692b7a318ba8f43708f5bba81e
And as you can see in the pic below they are enclosed and making use of the air duct intakes.
https://preview.redd.it/ucidv351u9t51.jpg?width=1920&format=pjpg&auto=webp&s=7d93e0c08d381fcaea2bcfc315c7b676f4006b51
You can also run CJEs whose exhaust would be under the waterline by using the two new connector blocks, a 90-degree corner and an extension piece, which allow them to work as long as you funnel the exhaust out above the waterline.
https://preview.redd.it/aiofdee2u9t51.jpg?width=1920&format=pjpg&auto=webp&s=72c1dd2023195ef2337704d0547904031ad97e6c
PACs have also had a rework and new additions.
We now have the long-range lens which has a circular 10° field of fire, the close-range lens which has a circular 35° field of fire, the scatter lens which has a circular 30° field of fire, and the vertical lens which has a 10° horizontal / 60° vertical field of fire (good for AA). The other difference between them is the percentage of damage drop-off at certain ranges, which is marked in their UI.
https://preview.redd.it/zvg2u0c5u9t51.jpg?width=1920&format=pjpg&auto=webp&s=567a2c4e092ea5fef62e67b051a74151e48b58d4
https://preview.redd.it/mboi63c5u9t51.jpg?width=1920&format=pjpg&auto=webp&s=78690d46df1466844cc38ff6b6623a30d910b726
One other awesome change to the PAC system is that melee lenses no longer need to be hooked up to the (now-called) long-range lens. Simply set up your melee head and snakey noodle PAC tubes with a terminator on the end, then link up to your other melee lens via Q in the drop-down menu. The scatter lens also deserves some attention here, as it can double the number of beams if you increase the charge time to the maximum of x4 at 30 seconds. The PAC system has had many tweaks, which you should check up on in the changelogs.
Shields have also had some love. Projector shields' reflect and laser-scatter modes are now merged, and they have also had a slight buff to ricochet chance. Ring shields' armour bonus has also been increased by 50%.
We also have some new additions to APS in terms of coolers.
From left to right, we now have an L-shape, a 4-way and a 5-way cooler.
https://preview.redd.it/lfi937e7u9t51.jpg?width=1920&format=pjpg&auto=webp&s=4ff99ceae914777137262754baa017300c2f4c1f
We now have some new wide wheel additions too for all you land vehicle lovers.
https://preview.redd.it/1ysi7u68u9t51.jpg?width=1920&format=pjpg&auto=webp&s=0760606aa3aebbde24a44fcb7319477453ee3b99
The next biggest change would be steam engines even though other changes will be implemented in this update. We are once again rehashing the whole system, which will be released in the following updates.
I had asked Weng a number of questions: why the change was needed, why the parts are expensive, and when and why you would use steam over fuel. This is what he had to say:
Reason why steam changes are needed:
  • Steam was previously totally unbalanced and arbitrary. For example, 9 small boilers with 1 small piston was the optimal steam setup, which was more efficient and denser than almost all other engines; and turbine power generation only depended on its pressure, so compact turbines were always optimal.
  • It lacked a lot of critical info in its UI.
  • It was hard to control the usage of steam

What's good with new steam:
  • A bit more realism and complexity
  • Larger steam engines now generally have better efficiency and density than equivalent smaller ones
  • More useful info such as total power production, performance over time
  • Possibility to regulate steam usage with valves

Pros of steam compared to injector fuel:
  • Denser and more efficient
  • Even denser with turbines
  • Easier to fit into irregular space
  • Provides a buffer with flywheels or steam tanks
  • More efficient when used for propellers
  • Doesn't require fuel containers, uses material directly from any type of storage
  • Computationally less intensive
Cons of steam compared to fuel:

  • Still hard to regulate, so it's only useful when the power usage is constant or there's a buffer energy storage
  • Turbines waste energy when batteries are full
  • Crankshafts waste energy when reaching speed limit
  • More susceptible to damage (injector engines can often still run fine even when half of it is gone, steam can stop working when a single pipe is destroyed)
Why the cost of parts is hilariously high: Steam engines have better efficiency and density (many players seem to forget that one) than injector engines, so a higher initial cost makes them less overpowered.
(In my opinion, the potential waste of energy is a major drawback of steam and justifies its high potential power. But iirc Draba said that injector engines would be useless on designs that require a lot of power if steam didn't have a higher initial cost, which also makes sense.)
Problem with new steam that can't be fixed:
  • Many old designs are broken due to low power output
  • More complexity
Problems that can probably be fixed but I don't have a solution:
  • Inefficient steam engines are ridiculously bad (a bad steam engine is like 30 PPM and 50 PPV, while a good one is around 600 PPM and 110 PPV) (I tried to fix this and spent like 40 hours on that, but I only managed to make it easier to build a mediocre engine)
  • Cannot be simulated to calculate a stable power output, like fuel engines do (actually it's easy but would take a lot of time to do and I don't think it's necessary)

Another massive change is the detection rework; I also left a few questions for Ian, AKA Blothorn, to explain the system and how it works.
Why a change was warranted:
  • Different types of detection weren't well balanced--for instance, visual components had better accuracy than IR and vastly better range.
  • Detection autoadjust used an incorrect formula, so optimizing adjustment was both mechanical and tedious.
  • Trackers having much better detection ranges than search sensors meant that detection was very binary--if you could see something at all you could usually get a precise lock (barring ECM, which was only counterable by large numbers of components).
  • Needing both sensors and munitions warners made reactive missile defence difficult on small vehicles.
  • There were a number of other inconsistencies/imbalances, e.g. some visual/IR sensors working through water, steam engines producing no heat, etc.
Overview of the new system:
On the offensive side, each sensor type now has a role in which it is optimal, and large vehicles are best using a variety to cover their weaknesses. Visual probably remains the default for above-water detection--it remains impossible to reduce visual signature other than reducing size. IR is better against fast vehicles, as they have trouble avoiding high IR signatures from thrust and drag. Both visual and IR are weak in rangefinding (although coincidence rangefinders are adequate for most purposes); radar is correspondingly strong in range and weak in bearing, although it often offers better detection chances against vehicles that don't pay attention to radar stealth.
On the defensive side, there are two approaches. Most obvious is signature reduction--while it is deliberately difficult to avoid detection entirely, reducing signature reduces detection chances and thus degrades opposing accuracy. At short ranges, however, this doesn't work well--detection chances are likely high regardless, and low errors at short range mean even sparse detections can give a good fix. Smoke and chaff can be useful here: they increase detection chance while adding a distance-independent error to opponent's visual and radar sensors, respectively.
ECM, buoys, and radar guidance have also been reworked. Buoys are more powerful, becoming more accurate as they get closer to the target. While their base error is high, at long ranges a buoy at close range can beat the accuracy of any onboard sensor. If you worry about opponents’ buoys, ECM can now intermittently jam them--except if they are connected to their parent vehicle by a harpoon cable, in which case they don't need the vulnerable wireless connection.
Most blueprints should need no modifications under the new system, although a few may want a few more or less GPP cards. The one exception is water interactions--IR cameras, laser rangefinders, and retroreflection sensors can no longer work through water, so submarines that used them underwater or vehicles that used them to detect submarines will need to replace them (likely with buoys). Vehicles that predominantly used visual detection should also consider adding a greater variety of sensors--in particular, visual camera trackers tied to AA mainframes should likely be replaced with IR cameras. Also, radars and cameras can take over missile and projectile detection (radar is required for projectile detection), so munitions warners can be removed/replaced with additional sensors.
Last but not least, a sweet little addition to our build menu prefabs.
https://preview.redd.it/iqw1ymabu9t51.png?width=1920&format=png&auto=webp&s=aa1e3cdba6e1d62e07aef83caf0acad2a39249ed
Please do make sure you go through the changelog as a hell of a lot has changed!
submitted by BaconsTV to r/FromTheDepths

My computer freezes except when i am monitoring it

Hey, guys, sorry to bother you with this. I usually try to check if there are similar posts or guides, but o boi. I will try to be detailed, not sure what matters or not, so sorry about that as well. The story is: I bought a computer for gaming last year. No problems at all the whole time, up until a month ago. I was playing Sekiro, and sometimes it would randomly freeze the screen and sometimes continue or distort the audio. The computer freezes until restart. At the 3rd/4th attempt it would run normally as if nothing ever happened. Beat the game while this issue was going on, np. After that, I decided to play Amnesia: Rebirth. Then my PC decided to go all out Johnny Sins on me and would crash every 2 minutes in, no escape. Since I usually try to pirate/configure a thing or two, I thought it could be malware. So I ran every single option of Windows Defender and Malwarebytes. One came up from a random game. Deleted it. Tried to repair Windows. Followed several guides for system restoration and scans. Checked for drivers and so on. Problem persisted. Eventually I was working from home and in the middle of it the problem decided to happen again, crashing the whole computer and not letting it turn on correctly until the third reset. Oh, so that was how my journey was going. Windows bitchslapping me out of nowhere. So I slapped back and restored Windows, first saving the files and deleting programs. My computer gave zero fucks about it and the problem persisted. So I summoned my asshat mode and did a full restore. Reinstalled Windows, deleted absolutely everything, cleaned all units, and prayed for our lord and savior Shaggy to overlook the process. Since I am an atheist it didn't work. I installed just the GeForce drivers and thought maybe it would run now. Also decided to download a newer version of the game. Guess what? Bingo bango bongo. The computer crashed within two minutes of gameplay. Also crashed on Spelunky 2, since I was trying to get angry at something else. Because why not.
By process of elimination I thought it could be the absolute only thing that I installed that was guilty: GeForce Experience and the drivers. Also looked at several posts here and elsewhere, and it appeared as a possibility. First turned off the grid, but kept it. The game lasted a little longer, still to no avail. Then tried deleting it. Still crashed, but the noise on the computer changed for some reason: the coolers randomly became more active. The same after uninstalling anything related to Nvidia. Same mockery from Satan. I thought maybe I fucked up by even installing it, so yeap, you guessed it, system restoration again. I could almost hear Steve Jobs laughing at me for not buying a Mac for 20x the price. Damn you Steve. So I tried just running the game without any new drivers to see what's up. DLLs were missing, manually downloaded them. Still crashing. The random crashes using normal programs stopped after the restorations, so I thought it was something.
I tried checking for logs and crash reports, but couldn't find any. So I downloaded a program that would actually look for any valid logs to analyze, in case it was even more of my blunt incompetence. I didn't find anything, even after the computer froze and crashed with it on. I checked possibilities related to the BIOS. Looked up firmwares and anything else related to a solution or reason for these events. I ordered some things to actually clean the hardware, as it could be due to dust, or even my tears at this point in time. I am still waiting for them to arrive. Even if it is not the problem, I am still in an abusive relationship with my computer and care about it.
Nothing seemed to be working. One possible issue could be overheating for some reason. But since the computer would crash in less than two minutes, it seemed very unlikely. All coolers are working in good condition. But welp. My hope was almost lost. If the cleaning didn't work, something about the hardware might be faulty, despite the computer's age. So I decided to simply go to the Task Manager. See if anything out of the ordinary was running. Nothing. As I wondered what in tarnation was going on with my life, I said fuck it and tried installing and updating every single driver. I also decided to dual screen, and while I played Amnesia, I would look at the machine's status in the Task Manager itself. At least the basics: CPU, memory, SSD, GPU, temperature. Also opened the Resource Monitor from there. I was at this point looking for a technician, as sheer fucking stupidity and persistence seemed to not be bearing the best fruits.
And then. Just out of fucking nowhere, like a flaming humongous dick coming from the sky straight to my ass. It worked. For absolutely no fucking reason I managed to play for 45 minutes straight with absolutely no problems whatsoever. Was I dreaming? Was this the real life? What was life? I knew no more. But it worked. I slowly walked away hoping that nothing would change until the next day. Maybe if I didn't look at it for too long it wouldn't smell my fear. Next day, it worked normally: watched my classes, sucked at Spelunky with zero problems. I was still not trusting this new reality. Something was off. Turned on Amnesia. First plank out and my computer went to Neverland. I could almost hear the binary laugh from this little mf. It crashed several times for no reason whatsoever. Then I remembered my glimpse of hope the day before. It was one thousand percent bullshit, but hey, I have no dignity at this point in time. Turned on Task Manager and Resource Monitor. It worked as if nothing wrong ever happened to society.
I was legit going to look for a technician and beg for money at the streets to pay for the repairs. But now it's just past this point. It's a matter of honor. Of values. Of dignity. So I came here to beg all of you good doers to assist me on my quest to understand this fucking bullshit in my life. This just can't be serious. I can't see a single reason why of all things this specific action would cause it to work normally, And I have no clue what else to do.
Thank you very much for your attention.
TL:DR
_Computer is less than a year old and I take good care of it
_Sometimes pirate programs, but try to look for the safest options very carefully
_Computer froze and crashed while playing games (Sekiro, Amnesia: Rebirth, Spelunky 2 [more rarely])
_Started crashing on regular programs such as Chrome
_Restored the system
_Erased every single file and cleaned the disk
_Checked for viruses (Windows Defender, Malwarebytes [all options available])
_Checked for issues with the driver itself and GeForce Experience
_Crash noise changes after deleting mentioned program and drivers, but still crashes
_Checked BIOS and firmware versions
_Tried with no new drivers, only manually installing missing DLLs
_Decided to update absolutely every single driver and Windows to their latest versions
_Downloaded a newer version of the game
_Checked for logs
_Downloaded a program to check for crashes, which found nothing even while on during a crash
_Nothing weird on Task Manager
_No new programs after the recovery (Exceptions: Chrome, Firefox, qBittorrent, Daemon Tools Lite, DS4Windows)
At none of those instances was the problem solved.
_Opened Task Manager to see info on CPU, SSD, GPU and temperature. Also opened the Resource Monitor.
The game suddenly works and never crashes again. Problem persists if those windows are closed or only opened during gameplay.
TL:DR of the TL:DR I am in pain, pls help
System configurations:
https://ibb.co/Qd6pst5
System: Windows 10 Pro - 20H2 - x64
Windows Feature Experience Pack 120.2212.31.0
P.S. I really don't know too much as I don't work with IT, so please, if you need any more info, or have any suggestions, I will try to answer as fast as possible. Sorry to cause any bother, and again, thank you for the attention.
submitted by MiddleShort9542 to r/techsupport

An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

An introduction to Linux through Windows Subsystem for Linux

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online, so Google if in doubt.

Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows' fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.
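As a fallback not covered in the original guide (so treat it as an optional extra), shutdown.exe is generally documented to perform a full shutdown that bypasses fast startup; running the following from Command Prompt should achieve the same hard shutdown:
shutdown /s /t 0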

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png)
Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle two different user/password combinations, but it's up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is the Ubuntu package manager, this is what you'll be using to install additional programs on WSL.
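As a quick usage example (git here is just an arbitrary, commonly available package, not something this guide specifically requires), installing a program looks like:
sudo apt-get install git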

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Print Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/<username>. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files, but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/<username>. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see what the top-most directories in your system are. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/<username>. When we open up Ubuntu, we don't want it tossing us in this random /home/<username> directory, we want our Windows home folder. Let's change that!
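Before moving on, here's a short example session tying the above together; <username> is a placeholder for your actual Windows username:
pwd
ls -a
ls /mnt/c/Users
cd /mnt/c/Users/<username>
pwd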

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu's password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/<username> by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/<username>. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Window's home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/<username>, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home/<username>/. ~
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the . at the end is cp-specific syntax that lets it copy anything, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/<username>) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!
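If you want to double-check that the copy worked, listing hidden files in your new home directory should now show those configuration files:
ls -a ~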

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then, type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0: if you install VcXsrv (XLaunch), this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.
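Put together, the two lines you'd add to the top of ~/.profile are just:
alias rm='rm -i'
export DISPLAY=:0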

Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it (see the one-liner example right after this list).
  • etc: no idea why it's called etc, but it contains system-wide configuration files
  • home: equivalent to Windows' C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/ you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
  • proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
  • run: directory for programs to store runtime information.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, provides information about different I/O devices to the Linux Kernel. If dev files allows you to access I/O devices, sys files tells you information about these devices.
  • tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
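As a tiny illustration of the /dev/null behavior mentioned above (safe to try, it just discards the text), redirecting output to it makes the data vanish:
echo "this text is gone forever" > /dev/null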
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!
submitted by HeavenBuilder to r/linux4noobs

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual made for Windows users. If you'd like to try it on Linux or macOS, we have also added the commands necessary to get the CodeReady Containers to run on those operating systems. Be warned, however: there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces; these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster, thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration. This allows for faster container provisioning, deployment and management, as OpenShift streamlines and automates these processes.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual, some knowledge is mandatory. Because most of the commands are executed within the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don't have this basic knowledge or have trouble with the basic Command Line Interface commands from PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge there are also some things that can be helpful to know, just to make the use of OpenShift a bit simpler. This consists of some general knowledge on container platforms and orchestration, such as Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The Red Hat OpenShift CodeReady Containers have the following minimum hardware requirements:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPUs
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V virtualization support, VT-x (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
The Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

CodeReady Containers on Linux requires the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution: Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager
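If you want to check whether these packages are already present before running the installer, the client tools they ship with can be queried (an optional, illustrative check that assumes the standard libvirt and NetworkManager client utilities are on your PATH):
virsh --version   # prints the libvirt client version if libvirt is installed
nmcli --version   # prints the NetworkManager version if NetworkManager is installed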

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on "https://www.openshift.com/", where you need to press login and after that select the option "Create one now".
After making an account the next step is to download the latest release of CodeReady Containers and the pull secret from "https://cloud.redhat.com/openshift/install/crc/installer-provisioned". Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands, use the command line interface to go to the location in your $PATH where you extracted the CodeReady Containers archive.
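For example, if you extracted the archive to C:\crc (an assumed location used purely for illustration), changing into that folder in PowerShell would look like this:
C:\Users\[username]>cd C:\crc
C:\crc>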
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command. This should show you the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that it is not possible to change the virtual machine's configuration afterwards. For this tutorial it is not necessary to change the configuration; if you don't want to make any changes, please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a Nameserver error later on; if this is the case, please start it with crc start -n 1.1.1.1
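While the cluster is starting you can follow its progress with the $crc status command (shown here purely as an illustration; the exact output depends on the crc version and the state of the cluster):
C:\Users\[username]\$PATH>crc status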

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands for this binary and virtual machine are:
get, this command allows you to see the value of a configurable property
set, this command sets the value of a configurable property
unset, this command removes a previously set value so the property falls back to its default
view, this command displays the current configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or turn it into a warning instead of ending up with an error (see the example after the command list below).
C:\Users\[username]\$PATH>crc config get
C:\Users\[username]\$PATH>crc config set
C:\Users\[username]\$PATH>crc config unset
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help
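As an illustration of the skip-check/warn-check pattern described above, such a property could be set like this, where <check-name> is a placeholder and the real property names can be listed with $crc config --help:
C:\Users\[username]\$PATH>crc config set skip-check-<check-name> true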

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine (a concrete example follows the commands below).
To increase the number of vCPUs available to the virtual machine, use $crc config set CPUs <number>. Keep in mind that the default number of vCPUs is 4 and the number you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <size-in-MiB>. Keep in mind that the default amount of memory is 9216 MiB and the amount you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs <number>
C:\Users\[username]\$PATH>crc config set memory <size-in-MiB>
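For example, to give the virtual machine 6 vCPUs and 12288 MiB of memory (illustrative values only; pick values your host can actually spare), the commands would look like this:
C:\Users\[username]\$PATH>crc config set CPUs 6
C:\Users\[username]\$PATH>crc config set memory 12288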

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
● crc.testing, this is the domain for the core OpenShift services.
● apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks are executed to verify the configuration.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the following CodeReady Containers entry to function properly: api.crc.testing is added as an entry in /etc/hosts pointing at the VM IP address.

Linux DNS setup

On Linux, CodeReady Containers expects a slightly different DNS configuration. CodeReady Containers expects NetworkManager to manage networking. NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to "192.168.130.11". In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
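After editing this file, NetworkManager has to pick up the change. On most systemd-based distributions this can be done with the following command (a generic example, not specific to CodeReady Containers):
sudo systemctl reload NetworkManager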

Accessing the OpenShift Cluster

Accessing the OpenShift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command, this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console
C:\Users\[username]\$PATH>crc console --credentials

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us, in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as the developer user. This can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported
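To confirm the import, you can list the resulting image stream afterwards (an optional check; the exact output columns may differ between OpenShift versions):
C:\Users\[username]\$PATH>oc get imagestream mediawiki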

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application with the previously imported image, go back to the console and the topology view. From there, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following, which means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling and horizontal scaling. Vertical scaling means adding only more CPU and disk to a single instance and is no longer supported by OpenShift. Horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP addresses. This makes all containers within a Pod behave as if they were on the same host. By giving each Pod its own IP address, Pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and/or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller, within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies, by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
Storage
OpenShift makes use of Persistent Storage. This type of storage uses persistent volume claims (PVCs). PVCs allow the developer to make persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options, most notably the reclaim policy (Retain, Recycle or Delete, as shown below).
It is however important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and therefore the storage cannot be reassigned to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV; this can be done by executing the following command
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or if you wish to reuse the same storage asset you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display the following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 
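If you only want to see the reclaim policy of a single volume instead of the whole list, a jsonpath query can be used (an illustrative shortcut; <pv-name> is a placeholder for the volume you patched):
C:\Users\[username]\$PATH>oc get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'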

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on "Monitoring". Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. These can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user, the way this is done depends on the identity provider, this depends on the mapping method used as part of the identity provider configuration.
for more information on what mapping methods are and how they function:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as following
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-username>
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-username> <username>
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we're going to assign a role to this new user. This can be done by executing the following command:
$oc create clusterrolebinding <binding-name> --clusterrole=<role-name> --user=<username>
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller --clusterrole=cluster-admin --user=admin
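To verify that the binding was created, you can describe it afterwards (an optional check; the name registry-controller is taken from the example above):
C:\Users\[username]\$PATH>oc describe clusterrolebinding registry-controller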

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
It is possible that your CodeReady Containers VM can't connect to the internet due to a Nameserver error. When this is encountered, a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administrative Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift [link] [comments]

MAME 0.223

MAME 0.223

MAME 0.223 has finally arrived, and what a release it is – there’s definitely something for everyone! Starting with some of the more esoteric additions, Linus Åkesson’s AVR-based hardware chiptune project and Power Ninja Action Challenge demos are now supported. These demos use minimal hardware to generate sound and/or video, relying on precise CPU timings to work. With this release, every hand-held LCD game from Nintendo’s Game & Watch and related lines is supported in MAME, with Donkey Kong Hockey bringing up the rear. Also of note is the Bassmate Computer fishing aid, made by Nintendo and marketed by Telko and other companies, which is clearly based on the dual-screen Game & Watch design. The steady stream of TV games hasn’t stopped, with a number of French releases from Conny/VideoJet among this month’s batch.
For the first time ever, games running on the Barcrest MPU4 video system are emulated well enough to be playable. Titles that are now working include several games based on the popular British TV game show The Crystal Maze, Adders and Ladders, The Mating Game, and Prize Tetris. In a clear win for MAME’s modular architecture, the breakthrough came through the discovery of a significant flaw in our Motorola MC6840 Programmable Timer Module emulation that was causing issues for the Fairlight CMI IIx synthesiser. In the same manner, the Busicom 141-PF desk calculator is now working, thanks to improvements made to Intel 4004 CPU emulation that came out of emulating the INTELLEC 4 development system and the prototype 4004-based controller board for Flicker pinball. The Busicom 141-PF is historically significant, being the first application of Intel’s first microprocessor.
Fans of classic vector arcade games are in for a treat this month. Former project coordinator Aaron Giles has contributed netlist-based sound emulation for thirteen Cinematronics vector games: Space War, Barrier, Star Hawk, Speed Freak, Star Castle, War of the Worlds, Sundance, Tail Gunner, Rip Off, Armor Attack, Warrior, Solar Quest and Boxing Bugs. This resolves long-standing issues with the previous simulation based on playing recorded samples. Colin Howell has also refined the sound emulation for Midway’s 280-ZZZAP and Gun Fight.
V.Smile joystick inputs are now working for all dumped cartridges, and with fixes for ROM bank selection the V.Smile Motion software is also usable. The accelerometer-based V.Smile Motion controller is not emulated, but the software can all be used with the standard V.Smile joystick controller. Another pair of systems with inputs that now work is the original Macintosh (128K/512K/512Ke) and Macintosh Plus. These systems’ keyboards are now fully emulated, including the separate numeric keypad available for the original Macintosh, the Macintosh Plus keyboard with integrated numeric keypad, and a few European ISO layout keyboards for the original Macintosh. There are still some emulation issues, but you can play Beyond Dark Castle with MAME’s Macintosh Plus emulation again.
In other home computer emulation news, MAME’s SAM Coupé driver now supports a number of peripherals that connect to the rear expansion port, a software list containing IRIX hard disk installations for SGI MIPS workstations has been added, and tape loading now works for the Specialist system (a DIY computer designed in the USSR).
Of course, there’s far more to enjoy, and you can read all about it in the whatsnew.txt file, or get the source and 64-bit Windows binary packages from the download page. (For brevity, promoted V.Smile software list entries and new Barcrest MPU4 clones made up from existing dumps have been omitted here.)

MAME Testers Bugs Fixed

New working machines

New working clones

Machines promoted to working

Clones promoted to working

New machines marked as NOT_WORKING

New clones marked as NOT_WORKING

New working software list additions

Software list items promoted to working

New NOT_WORKING software list additions

Merged pull requests

submitted by cuavas to emulation [link] [comments]

[OWL WATCH] AMA's SUMMARY

Disclaimer: This is my arbitrary summary for myself, so there could be some misunderstandings.
If you want the full picture, I recommend reading the full thread.
But, for a guy who just settles with 'less than perfect' summary, why not sharing my own?


Billy-IF
All the key research questions in coordicide have been answered. The challenges lying ahead are implementing and testing our solution. We are implementing our solution into the Pollen Testnet and typing it up into our research specifications (the specifications, while not complete, will hopefully be made publicly available soon).
After these tasks are done, our solution will go through a rigorous testing phase. During this time, we will collect performance data, look for attack vectors, and tune the parameters.

domsch
the only way for IOTA and crypto-currencies in general to be adopted is via clear and strong regulatory guidelines and frameworks.
We often have the situation where a company reaches out to us and wants to use the IOTA token, but they are simply not able to due to uncertainties in regards to taxes, accounting, legal and regulatory questions.
The EU is taking a great stance with their new proposal (called MICA) to provide exactly this type of regulatory clarity and guidance we need. So we are very happy about that and see this as a great development for the adoption of IOTA.
We are very active in INATBA (in fact Julie is still on the board), are in the Executive Committee of the Digital Chamber of Commerce (https://digitalchamber.org) and are actively working with other regulatory bodies around the world. I think that especially in 2021, we will be much more pro-active with our outreach and efforts to push for more regulatory guidance (for the IOTA Token, for Tokenization, Smart Contracts, etc.). We are already talking with companies to start case studies around what it means to use the IOTA token - so that will be exciting.

domsch
actual product development, will really help us to convince regulators and lawmakers of what IOTA is intended for and where its potential lies.

DavidSonstebo
We are actively participating in regulatory matters via entities such as INATBA, as well as with local regulators in individual countries to help shape regulations to favor the adoption of crypto.
once the use cases can display real-world value, then deployments will happen regardless.

serguei_popov
"The multiverse" is quite an ingenious and promising idea that has many components. Actually, quite some of those are being incorporated to the Coordicide already now. The most "controversial" part, though, is the pure on-Tangle voting -- Hans thinks it should work fine while I think that it can be attacked

Billy-IF
Several of our modules have been developed jointly with researchers in academia. For example, our rate control module is being developed jointly with professor Robert Shorten and his team at Imperial College. Moreover, our team has published several papers in peer reviewed journals and conference proceedings.
We are also making sure the entire protocol is audited. First, we have a grant given to Professor Mauro Conti specifically to vet our solution.
You may hear an announcement regarding a similar grant to a second university. Second, we will eventually offer bug bounties on our testnet. Lastly, we will hire some firm to audit our software and our protocol.

domsch
I would say that the entire enterprise and also the broader crypto-community is certainly actively following our developments around Coordicide.
Once the coordinator is removed, and with the introduction of Tokenization and Smart Contracts as Layer 2 solutions, there is no reason not to switch to IOTA.
there are probably even more who will reach out once we've achieved our objective of being production ready.

serguei_popov
Our objective is to have Honey ready within the first half of 2021.
we are very confident that Coordicide will happen in time.

Billy-IF
For Chrysalis, we will implement a deposit system. In order for an address to receive dust (which will be explicitly defined as any output with value less than a certain threshold), that address must already have a minimum balance (either 1 MIota or 1 KIota). The total ordering in conflict white flag makes this solution incredibly easy to implement.
this solution in the Coordicide needs alterations, because of the lack of total ordering.

HusQy_IOTA
Sharding is part of IOTA 3.0 and currently still in research.
there are of course some hard questions that need to be answered but we are pretty confident that these questions can and will be answered.

Billy-IF
Having these layers helps keep the protocol modular and organized. Indeed, it is important to be able to track dependencies between the modules, particularly for standardization purposes. As your question suggests, a key component of standardization is the ability to update the standard (no living protocol is completely static). Standardization will be accompanied by a versioning system, which tracks backwards compatibility.

Billy-IF
Well, let me try to clear these things up.
-The congestion control mechanisms are indifferent to the types of messages in the tangle. Thus non-value transactions (data messages) will be processed in the same way as value transactions (value messages). Thus, in times of congestion, a node will require mana in order to issue either of them.
-You will not need mana to simply “set up a node” and monitor the tangle.
However, in order to send transactions (or issue any messages) you will need mana in times of congestion.

IF_Dave
The next big one is next month: Odyssey Momentum. This is a huge multi-day DLT-focussed hackathon with a lot of teams and big companies/governments involved working on solutions for the future. The IOTA Foundation is an Ecosystem member of Odyssey and we will be virtually present during the hackathon to help and guide teams working with IOTA.

Billy-IF
Coordicide will not fail. We are working very carefully to make sure that coordicide is a success, and we will not launch Iota 2.0 until it has gone through the proper testing.

domsch
Everyone internally and also our partners are very confident in the path that we've defined. Failure is not an option for us :)

HusQy_IOTA
We will most probably see a slight delay and see Nectar early 2021 instead.

DavidSonstebo
No, IF is not running out of money, this narrative has been repeated for 3 years now, yet we're still operating. Of course, bear markets impact our theoretical runway, but The IOTA Foundation is hard at work at diversifying revenue streams so that we become less and less dependent on the token holdings.

IF_Dave
We are constantly working on getting more exchanges to list IOTA; we however do not pay for listings.
Some exchanges require a standard signature scheme; with the introduction of ed25519 in Chrysalis phase 2 that will no longer be a restriction.

HusQy_IOTA
Being feeless is one of the most important aspects here since a new technology usually only gets adopted if it is either better or easier to use than existing solutions.
if it enables new use cases that would be completely impossible with the existing infrastructure. That is the single biggest reason why I think that IOTA will prevail.
An example for such a "new" use case is the Kupcrush use case presented by Terry

domsch
there are so many amazing use cases enabled with IOTA
I would say that the most specific use case which gets me really excited is conditional access control based on IOTA payments - in particular for the sharing economy.
IOTA Access + IOTA tokens really enable so many exciting new possibilities.

Billy-IF
In fact, with coordicide research coming to an end, we have already started to look into sharding. Indeed, sharding will provide the scalability needed to handle the demands of an IoT-enabled world.

Billy-IF
We have designed Iota 2.0 to not have large concentrations of power. Unlike PoS systems, Iota will not be a block chain and thus will not be limited by a leader election process.
In a DAG, people can add information in parallel, and so nodes with small amounts of mana can create messages at the same time as large mana holders.

Billy-IF
In any DLT, "voting" needs a sybil protection system, and thus "voting power" is linked to some scarce resource. Typically the allocation of any resource follows some sort of Zipf distribution, meaning that some people will have a lot, and others not. The best we can do is to make sure that the little guys get their fair share of voting power.

HusQy_IOTA
With Chrysalis and coordicide we are finally moving to being production ready which will most probably also lead to a bigger market share as partners will start to use the technology which will increase the demand for tokens.

HusQy_IOTA
Privacy features are currently not being researched and it might be hard to support that on layer1 but privacy features could definitely be implemented as a 2nd layer solution

domsch
We focus on making the base layer of IOTA (namely transactional settlement) as secure and fast as possible. Many of the greater extensions to this core functionality are built on layer 2 (we already have Streams, Access, Identity and now also Smart Contracts)

HusQy_IOTA
There are discussions about increasing the supply to be able to still have micro transactions if the token would i.e. cost a few hundred dollars per MIOTA but we have not made a final decision, yet.

IF_Dave
We think we have an edge over other technologies, especially when it comes to fee-less transactions allowing a lot of use-cases that would otherwise be impractical or impossible. Adoption is not a given, but a useful technology will be utilized if it has the right functionality.

DavidSonstebo
That is why we have such a widespread strategy of driving IOTA, not only its development but also in industry, academia, regulatory circles, raising awareness, funding ecosystem efforts, etc. I am confident in the position we are in right now.
There is a clear demand for financial disruption, data security, and automation.
someone has to assemble a killer application that meets the demand; IF is pushing for this with partners

Billy-IF
Our goal is to have at least 1000 TPS.

Billy-IF
Personally, I think our congestion control algorithm is our greatest innovation.
our algorithm can be used in any adversarial setting requiring fairness and consistency. Keep an eye out for a blog post that I am writing about it.

HusQy_IOTA
about proof of inclusion?
I have started implementing a proof of concept locally and the required data structures and payload types are already done but we won't be able to integrate this into goshimmer until we are done with the current refactoring of the code.

Jakub_Cech
Many of the changes that are part of Chrysalis would have made it and will make it into Coordicide, like the atomic transactions with binary layout. The approach we took was actually the opposite - as in, what are the improvements we can already make in the current network without having to wait for Coordicide, and at the same time without disrupting or delaying Coordicide?

Billy-IF
All the key research questions in coordicide have been answered.
in reality, the biggest research challenges are behind us.

Jakub_Cech
When Chrysalis part 2 will be live?
We are still aiming for 2020, as still reflected at roadmap.iota.org. We want to have a testnet where everyone can test things like the new APIs on, and some initial implementations of specific client libraries to work with. This will also allow us to test the node (both Hornet and Bee) implementations more in the wild.
The new wallet will also be tested on that testnet.
The whole testing phase will be a big endeavor, and, at the same time, we will also start auditing many of the implementations,

Billy-IF
We are in contact currently with OMG, and they are advising us on how to draft our specifications in order to ease the standardization process. Coordicide, or Iota 2.0, actually provides us a chance to start off with a clean state, since we are building it from the ground up with standardization in mind.

IF_Dave
The focus at this point is delivering Chrysalis and Coordicide. DeFi could possibly be done with Smart contracts at a given moment but it's not a focus point at this stage.

domsch
about price?
We are quite frankly not worried about that. Knowing everything that we have in the pipeline, our ecosystem and how everything around IOTA will mature over the next few months, I am sure that the entire crypto ecosystem will wake up to IOTA and its potential. Many participants in the market still have outdated information from 2017 about us, so there is certainly some education to do. But with Chrysalis and the Coordicide progress, all of that will change.

domsch
At the core of it, the IOTA Foundation is a leader in trust protocols and digital infrastructure. We will always remain an R&D organization at our core, as there is a lot more development we can lead when it comes to making our society and economy more fair, trustless and autonomous.
I certainly see us evolving into a broader think-tank and expert group to advise governments and large corporations on their strategies - in particular around data, identity and IoT.

HusQy_IOTA
barely any cryptocurrency gets used in the real world.
IOTA will soon start to actually be used in real world products and it is likely that this will also have an impact on the price (but I can't really give any details just yet).

domsch
ISCP (IOTA Smart Contract Protocol) is based on cryptographic consensus via BLS threshold signatures. That means a certain pre-defined number of key holders have to come together to alter the state of the contract or to send funds around. If a majority of the nodes are offline, the threshold will not be reached and the contract cannot be executed anymore. There are various ways in which we are looking at this right now to make smart contract recovery and easy transitions possible.
The beauty of ISCP is that we have a validator set which you can define (it can be 3 or it can be 100+), and via an open selection process we can really ensure that the network will be fully decentralized and permissionless. Every smart contract committee (which will be its own network of course) leverages the IOTA ledger for security and to make it fully auditable and tamper-proof. Which means that if a committee acts wrong, we have cryptographic proof of it and can take certain actions.
This makes our approach to smart contracts very elegant, secure and scalable.

Billy-IF
No, we will not standardize Iota 1.5. Yes, we do hope that standardization will help adoption by making it easier for corporates to learn our tech.

serguei_popov
In general, I also have to add that I'm really impressed by the force of our research department, and I think we have the necessary abilities to handle all future challenges that we might be facing.

Billy-IF
In coordicide, i.e. Iota 2.0, yes all nodes have to process all transactions and must receive all data. Our next major project is sharding, i.e. Iota 3.0 which will remove this requirement, and increase scalability.
FPC begins to be vulnerable to attack if the attacker has 30%-40% of the active consensus mana.

HusQy_IOTA
There is no doubt about coordicide working as envisioned.

HusQy_IOTA
When will companies fully implement iota tech?
Soon(TM) :P

Billy-IF
Well first, we are going to make sure that we dont need a plan B :) Second, our plans for the actual deployment are still under discussion. Lastly, we will make sure there is some sort of fail safe, e.g. turning the coordinator back on, or something like that.

Billy-IF
All the key research questions in coordicide have been answered, and each module is designed.

Billy-IF
What will be standardized is the behavior of the modules, particularly their interactions with other nodes and wallets. Implementation details will not be standardized. The standardization will allow anyone to build a node that can run on the IOTA 2.0 network.

DavidSonstebo
Tangle EE has its own Slack (private) and calls, so the lack of activity can probably be explained in that fashion. Coordicide will have an impact on all of IOTA :) There's certainly a lot of entities awaiting it, but most will start building already with Chrysalis v2, since it solves most pain points.

Billy-IF
If there are no conflicts, a message will be confirmed if it receives some approvals. We estimate that this should happen within 10-20 seconds.
To resolve a conflict, FPC will typically take another 4 minutes, according to our simulator. Since conflicts will not affect honest users, most transactions will have very short confirmation times.

Billy-IF
a colored coin supply cannot exceed that of all Iota. You could effectively mint a colored coin supply using a smart contract, although there would be performance downsides. There are no plans to increase the supply. The convergence to binary will not affect the supply nor anyone's balances.

HusQy_IOTA
Both, Radix and Avalanche have some similarities to IOTA:
- Avalanche has a similar voting scheme and also uses a DAG
- Radix uses a sharding approach that is similar to IOTAs "fluid sharding"
I don't really consider them to compete with our vision since both projects still rely on fees to make the network work.
Centralized solutions can however never be feeless, and being feeless is not just a "nice feature" but absolutely crucial for DLT to succeed in the real world.
Having fees makes things a lot easier and Coordicide would already be "done" if we could just use fees but I really believe that it is worth "going the extra mile" and build a system that is able to be better than existing tech.
submitted by btlkhs to Iota [link] [comments]
