Windows 10 - Deeper Impressions



And you can exclude the telemetry code by linking to notelemetry.obj.
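For what it's worth, a minimal example: assuming a toolset recent enough to ship notelemetry.obj (it arrived in a VS2015 update), you just add the object file to the link, e.g. on the command line

cl /O2 myprogram.cpp notelemetry.obj

or in the IDE via Project Properties > Linker > Input > Additional Dependencies.  (myprogram.cpp is of course just a placeholder.)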

I personally find VS2015's enhanced code coloring logic worth the switch all by itself.  And generally speaking it just feels like it helps you handle bigger, more complex projects.  However, there are other things where they've just changed the way things work (e.g., finding things) that take some getting used to, and it wouldn't be Visual Studio if it didn't have one or two things that aren't *quite* right and just nag the hell out of you.  It pops up a heckuva lot of stuff while you hover and type in code, to the point where sometimes it's hard to see the code.  But that can be dialed back or deconfigured as desired.

We've found that the optimizer does a better job of making faster native machine code, which is important if you have compute-intensive code.

Be aware that VS2015 Community Edition tries to contact Azure servers and a few others, and while it doesn't REQUIRE you to be logged on with a Microsoft account, it occasionally needs to refresh its "license".  The online chattiness can be muzzled in various ways, but I worry that upcoming updates will enforce cloud connectivity ever more, until there's no way to use it without giving away the farm.

That being said, we've gotten value out of VS 2015 so far.

Bear in mind that release candidates are already out for the next major version, so VS 2015 is a mature release and probably reasonably safe to adopt.

-Noel



I agree, NoelC.

Use Win10, but accept that it's no conspiracy to say it's the thin end of a wedge in MS's plans.

It's not going to get nicer to use; it's going to get harder to use if you want to keep any semblance of privacy.  Only a huge scandal/data leak will change things.

In the UK we're just about to get the new snooping law, which lets practically any government department access your web metadata.  Anyone using a VPN, Tor, or other systems (like the US laws coming along?!) to obfuscate their web use is seen as fair game for government attack, to see what you're 'hiding'.

Windows 10, Google, government via ISP.

All your internets/computing activity are belong to us!

I'm using Win10 more to learn it than because I'm happy with it.  Win7 will likely go back on this laptop as soon as Win10 becomes too much work.

Purely from a professional stance, I'm not at all happy that MS and government can lift so much data and then make it available to 3rd parties, or leave it exposed to hacks.

I work with NDA material and competitive material.  I pay professional insurance to cover the legal side of things, but when you move from a 'safe' Win7 config to the wild west of Win10, you're a little uneasy that it'll come back to bite you down the line, and that you'll still somehow end up liable for having agreed to the MS TOS.

Without having an 'off' button for all the spyware, how can I ever guarantee anything with regards to data security?

As a 'small' business (freelancer), it certainly makes me feel like I'm never going to be taken seriously when I'm using a leaky operating system, without the 'big company' clout to run Enterprise versions and have an IT chap making it secure (on paper) for me.

In Win7 I could guarantee safety (or at least the liability was clear) just by having encrypted drives and a firewall, with MWB scanning actively.  With Win10 all that is for nothing if it's sending data around willy-nilly through AWS and whoever else, without me knowing!


Good, rational thoughts, ProfessorUltraviolet.

Regarding taking control of what your system is doing online, it's still possible even with Win 10 if you're willing to shun the cloud integration and the Apps.  But that's not a configuration Microsoft supports any more.

Some random suggestions:  Look into the upcoming version 8 release of the Sphinx Windows Firewall Control product.  That version is going to make it practical (dare I say easy?) to manage a deny-by-default firewall setup.  Address-based firewalls are simply impractical now, and this new version is going to do it by name!

Also consider implementing a DNS proxy server that can blacklist sites by wildcard.  That makes it possible to blacklist things such as...

vortex*
*vortex.data.microsoft.com
*vortex-win.data.microsoft.com
*settings-win.data.microsoft.com
*vo.msecnd.net
*telemetry*microsoft*
a-*.a-msedge.net
*smartscreen*microsoft*

...and many, many more.  In fact, my wildcarded blacklist has over 21,000 entries right now.  That does not include non-wildcarded entries, which number over 55,000.  It's a VERY effective way to just ignore the worst parts of the web. 
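In case you're wondering how the wildcard matching works, conceptually it's just shell-style globbing.  A toy Python illustration (hypothetical code of mine, not what any of these products actually run):

import fnmatch

# A few of the patterns from the list above.
BLACKLIST = ["vortex*", "*vortex.data.microsoft.com", "*telemetry*microsoft*"]

def is_blacklisted(hostname):
    # True if the requested name matches any wildcard pattern.
    return any(fnmatch.fnmatch(hostname, pattern) for pattern in BLACKLIST)

print(is_blacklisted("vortex-win.data.microsoft.com"))   # True
print(is_blacklisted("www.msfn.org"))                    # False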

-Noel


12 hours ago, Dibya said:

I am upgrading to VS2015.

My 8.1 can run VS2015.

Thanks, friends, I will install it.

You're welcome.  I run it under Win 8.1 as well.

I failed to mention above that there are extensions worth considering.  Those that I use include:

  • AnkhSVN, which integrates SubVersion access into Visual Studio.
  • Line Endings Unifier, which helps keep Unix vs. DOS line endings straight.
  • NoMorePanicSave, which saves all files that have been altered when Visual Studio loses focus.
  • Open Command Line, which as a side benefit adds batch file code coloring.
  • VSColorOutput, which colorizes things in the Output Window (handy for spotting errors and warnings).
  • Windows Installer XML Toolset, which aids in making installers.

Of course, everyone has their own favorite set.

-Noel


21 hours ago, NoelC said:

Good, rational thoughts, ProfessorUltraviolet.

Regarding taking control of what your system is doing online, it's still possible even with Win 10 if you're willing to shun the cloud integration and the Apps. [...]

Also consider implementing a DNS proxy server that can blacklist sites by wildcard. [...] It's a VERY effective way to just ignore the worst parts of the web.

-Noel

I'm not an expert on how DNS and connections work at the network level, but I'm guessing all networking happens via IP and NAT (or IPv6/insidious IPv6 tunnels!!!), while everything these days requests a domain name to get the right IP.

So by denying the IP lookup via DNS, you're blocking the connection?
 

Or are you essentially returning 0.0.0.0 or a 'null' IP for the DNS names you don't want things to resolve?

Is all that done silently and relatively 'lightly' in terms of networking overhead?

To me it still seems a bit risky letting Windows out on the WWW full stop.

I'll have to go read up on how DNS works.  It'd be nice to run a local router with a DNS cache (fed from a trusted vanilla DNS provider); then Windows would only ever see the router's DNS, and the blacklist could be applied at *that* point, before Windows even gets an answer.

I'm assuming then you can 'feed' Win10 only the DNS info you want it to have, because any other DNS requests are sent to null.

BUT, all it'd take is one hard-coded IP address that Win10 could access, via whatever sneaky protocol it wanted, to fetch an IP list and avoid DNS entirely.

Ultimately we have to trust that MS isn't going to subvert these age-old network systems, because if it does, it can do whatever it likes.

Given how much VMs have improved recently (GPU support, mainly), I'm increasingly tempted to just try to seal Win7/10 up for work inside a VM on Linux.


You have it essentially right:  Have the DNS provider your system relies upon return a "not found" response for sites you don't want visited, and voila, a site is essentially blacklisted. 

In my case I'm doing it with a dedicated DNS server package that supports local resolutions, and will return a "not found" response as a special case when a particular site is defined in a local list as 0.0.0.0.  The DNS server package I use scans its blacklists, and for sites not found in the lists forwards the request to an online DNS server (in my case, OpenDNS), then routes the response back to the requester all in a few milliseconds.  It's amazingly responsive.
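Just to illustrate the flow, here's a bare-bones Python sketch of the blacklist-then-forward idea.  To be clear, this is NOT the code of any actual product, just a toy: 208.67.222.222 is one of OpenDNS's resolvers, binding port 53 requires admin rights, and a real server also handles TCP, EDNS, caching, and malformed packets.

import fnmatch
import socket

UPSTREAM = ("208.67.222.222", 53)                    # an OpenDNS resolver
BLACKLIST = ["*vortex*", "*telemetry*microsoft*"]    # wildcard patterns

def qname(packet):
    # Pull the queried name out of the question section (after the 12-byte header).
    labels, i = [], 12
    while packet[i] != 0:
        n = packet[i]
        labels.append(packet[i + 1:i + 1 + n].decode("ascii", "replace"))
        i += n + 1
    return ".".join(labels)

def nxdomain(query):
    # Echo the transaction ID and question; flags 0x8183 = response + NXDOMAIN.
    return query[:2] + b"\x81\x83" + query[4:6] + b"\x00" * 6 + query[12:]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 53))
while True:
    query, client = sock.recvfrom(512)
    if any(fnmatch.fnmatch(qname(query), p) for p in BLACKLIST):
        sock.sendto(nxdomain(query), client)         # blacklisted: "not found"
    else:
        fwd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        fwd.settimeout(3)
        fwd.sendto(query, UPSTREAM)                  # forward to the real resolver
        reply, _ = fwd.recvfrom(4096)
        sock.sendto(reply, client)                   # relay the answer back
        fwd.close()

Point the client's DNS at 127.0.0.1 and the blacklisted names simply stop resolving.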

Blacklisting can also be done with a specially crafted hosts file, though that doesn't provide wildcard capability.  There are some who say a big hosts file causes undue overhead, but with the normal DNS caching subsystem left in place I've not seen it become an issue, even with 60,000+ entries in the file.  Of the two methods I'd suggest the DNS proxy: even if you don't have a dedicated system to run it on, you can implement something like Dual DHCP DNS Server (with my mods) on the same hardware and have the system contact that local service to resolve DNS requests.
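Just to make the hosts file variant concrete, the entries look like this (the file lives at %SystemRoot%\System32\drivers\etc\hosts; note there's no wildcard support, so every exact name has to be spelled out):

0.0.0.0  vortex.data.microsoft.com
0.0.0.0  vortex-win.data.microsoft.com
0.0.0.0  settings-win.data.microsoft.com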

Going further...

If you want to take controlling network access to the next level, a deny-outgoing-connections-by-default firewall setup will make sure nothing gets through without your prior approval.  There is a product, Sphinx Windows Firewall Control, about to be released that embraces DNS name resolution as well, so you can manage your firewall configuration by name, not by IP address.  This makes a huge difference with regard to maintainability.  Once you have things set up initially, it becomes almost "set it and forget it"; right now I haven't had to think about my firewall configuration on any system for weeks.  If software makes some new attempt to contact another system online, the firewall just pops up a notification that it was blocked, so what little maintenance remains is reactive.
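If you want to experiment with deny-by-default before that release, the built-in Windows Firewall can be put into block-outbound mode too, though only with per-program or per-address rules, not by name.  Something like this (the Firefox path is just an example of an allow rule):

netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound
netsh advfirewall firewall add rule name="Allow Firefox out" dir=out action=allow program="C:\Program Files\Mozilla Firefox\firefox.exe"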

With all the above in place, you have gained the ability to see what DNS requests are being made, AND what connection requests are being refused or allowed by the firewall, and that can lead you further - to being able to find the tweaks / configuration options that will stop the requests from even being made to start with.

Just as an example, one might notice (from the DNS log and/or firewall log) that Windows is regularly trying to contact these servers:

spynet2.microsoft.com
spynetalt.microsoft.com

So besides blacklisting them and continuing to see the names resolved to "not found", one can research these sites and find that they're the telemetry portals for Windows Defender and the Microsoft Malicious Software Removal Tool.  That research will turn up the fact that there are registry settings you can change that will stop the system from trying to report to these servers to begin with.  So between the blacklisted name resolution, the reconfigured registry settings, and the firewall, the attempts to send information elsewhere are blocked multiple different ways.
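As a concrete example of the sort of settings that research turns up, the Defender MAPS/SpyNet membership can be switched off with policy values like these (a .reg sketch; to the best of my knowledge SpyNetReporting=0 disables reporting and SubmitSamplesConsent=2 means "never send", but verify against your own build):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\Spynet]
"SpyNetReporting"=dword:00000000
"SubmitSamplesConsent"=dword:00000002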

Other results of having this information include disabling some services (e.g., DiagTrack), disabling some scheduled tasks, finding various supported but sometimes non-obvious settings, registry tweaks of course, etc.
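For instance, from an elevated command prompt (assuming default service and task names, which vary a bit by Windows version):

sc stop DiagTrack
sc config DiagTrack start= disabled
schtasks /Change /TN "\Microsoft\Windows\Application Experience\Microsoft Compatibility Appraiser" /Disable

DiagTrack is the telemetry service itself ("Connected User Experiences and Telemetry" on newer builds), and the Compatibility Appraiser is one of the telemetry-related scheduled tasks.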

Do all this for a while and you have a pretty private system that doesn't even try to spill the beans, and if it does try it fails.  With suitable configuration, things like certificate chain verifications still occur unhindered.  Best of all worlds.

I've done the above on Win 7, 8.1, and 10.  It's quite pleasing to see a Win 10 system not even trying to go online except for approved comms.

-Noel


So Microsoft copies Apple when they move toward removing user control...  Apple furthers the effort seeing that their approach is validated by the behemoth.  Google copies everyone, of course...

Maybe a fully open source Linux really is the future and To Hell with the big companies.  Time was it was good to partner with a big commercial entity who had at least SOME ideals aligned with your own.  Now where is that alignment?

It's becoming all about taking control from users because clearly We Know Best (how to fleece them).  Is that good for most folks?  Have so many folks just reached this stage that it's hopeless to try to THINK any more?


-Noel


2 hours ago, NoelC said:

So Microsoft copies Apple when they move toward removing user control...  Apple furthers the effort seeing that their approach is validated by the behemoth.

And when you combine the two worlds, something must happen ... :w00t: :ph34r:

https://www.reddit.com/r/apple/comments/5e1g37/warning_bootcamp_driver_causing_blown_speakers_in/

https://www.reddit.com/r/apple/comments/5e7whh/update_bootcamp_driver_causing_blown_speakers_in/

:whistle:

The day software can actually kill hardware has come. :(

jaclaz


 


16 hours ago, jaclaz said:

The day software can actually kill hardware has come. :(

I think Microsoft may have led that charge with CPU overheating due to the "hard CPU loop during Checking For Updates" problem.

-Noel


1 hour ago, NoelC said:

I think Microsoft may have led that charge with CPU overheating due to the "hard CPU loop during Checking For Updates" problem.

-Noel

Well, but that normally didn't botch the hardware; the temperature sensors (hardware) would have prevented that and shut the machine off. :unsure:

Here we are seemingly talking of speakers being blown for good.

(And we are not talking of el-cheapo external speakers that can be replaced in no time; we are talking of the internal speakers of a laptop, which cannot even be opened by "common mortals".)

jaclaz
 


What you describe as "normal" would better be called "optimal".  People pretty much everywhere this has been discussed have reported heat-related failures in their systems after errant Windows Update CPU looping.  Do you think they're all lying?  I don't.

We're in agreement that software should not be given the power to actually destroy things unless serious engineering consideration has also been given to ensuring that all reasonable measures have been taken to prevent such failure.  Microsoft has certainly put no effort into mitigating the hard CPU looping thing.  Maybe Apple just didn't think things through (or secretly thinks that if you run Windows on your MacBook you should be punished).

I suspect that software in modern times is ever more apt to just fail without warning or reason, because - frankly - building robustness into any system takes more thought than not doing it, and people for some reason seem to HATE thinking more now than ever.

It's not REALLY necessarily more expensive to do good work, except perhaps that you have to pay well-educated engineers to get good engineering (as opposed to sending the work to a 3rd world country to have it done as cheaply as possible).  Whoever thinks software development is a labor job and should be low-paid is an id***.  A freaking id***.  I know for a fact that some such people work at Microsoft.

Poor software work isn't faster to get done either.  I have shown again and again in my career that smart, well-thought-through engineering, where something is gotten right the first time, is virtually always more efficient than starting with slipshod, incomplete programming done by sloppy workers who don't care - primarily because the problems caused are not insignificant, and lead to much rework by someone like me.  I've been saving poorly done projects from abysmal failure my entire career.  With lousy, rushed software, support costs go up and future business goes down.  There's an old saying: "There's never enough time given to do it right, but there always seems to be time to do it over."  I was the adept troubleshooter who did it over, for over 30 years, until I started my own company.

-Noel


8 minutes ago, NoelC said:

Poor software work isn't faster to get done either. [...] There's an old saying: "There's never enough time given to do it right, but there always seems to be time to do it over."

Yep :), also sometimes removing a safety fuse (a single §@ç#ing fuse worth - say - 0.10 USD), combined with (bad) software engineering, can have serious consequences.  A well-known case, JFYI:

http://hackaday.com/2015/10/26/killed-by-a-machine-the-therac-25/

Quote

By performing [Fritz's] procedure on his older machine, he received a similar error, and a fuse in the machine would blow.  The fuse was part of a hardware interlock which had been removed in the Therac-25.

jaclaz
 

