
Wikipedia:Reference desk/Archives/Computing/2019 September 2

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 2

Vandalized article.

https://wiki.riteme.site/wiki/List_of_countries_by_firearm-related_death_rate

The article above has been purposely edited to include incorrect information. It shows that the rate of gun homicides in the USA is lower than in Great Britain, which is insane. I noticed that it has been edited twice in the last 2 days.

I am a senior, not a computer wizard, and I do not know how to fix it. Please help. — Preceding unsigned comment added by 50.71.150.46 (talk) 03:58, 2 September 2019 (UTC)[reply]

I have changed the UK figure back to what it was before, but now the US has two entries! Graeme Bartlett (talk) 04:13, 2 September 2019 (UTC)[reply]
I reverted to an earlier version to remove the duplicate US entry. The data in the remaining entry matches the referenced source. -- Tom N talk/contrib 05:36, 2 September 2019 (UTC)[reply]

Chrome not refreshing page when first loaded

Version 76.0.3809.132 (Official Build) (64-bit) on Windows 7, 64-bit.

I go to a web page and it displays the page as it appeared the last time I visited, presumably from the cache. Doing a page refresh does update it, but I'd prefer not to have to do that. Obviously I can empty the cache, but that's a one-time fix. Actually, if possible, I'd like it to ask which version to display, since there are times when the old version is preferred (such as when I have lost my connection to the Internet and would rather see the old version of the web page than an error message). Is this possible under Chrome? SinisterLefty (talk) 13:25, 2 September 2019 (UTC)[reply]

Using a PowerShell script

I should state up front that, apart from rudimentary SQL, I am not a scripting guy. What I want to do is to obtain a database filled with the metadata of my music files so that I can play around with it in Excel. While the DIR command works well for generating simple lists with file sizes, it doesn't grab much else, and searching online has led me to this and thence to this. I've installed PowerShell and executed a simple "Hello World"-type thing, so I'm pretty sure everything is okay as far as the program goes.
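
A minimal sketch of the filesystem-only half of this, for comparison: PowerShell's built-in Get-ChildItem can reproduce what DIR gives (names, sizes, dates) and send it straight to a CSV that Excel opens, but media tags such as artist or album are not filesystem properties, which is why the Shell.Application approach in the script quoted below is needed for those. The paths here are only examples.

# Filesystem properties only, exported to CSV for Excel.
# 'D:\Music' and 'D:\music-files.csv' are example paths, not from this thread.
Get-ChildItem -Path 'D:\Music' -Recurse -File |
    Select-Object FullName, Name, Length, LastWriteTime |
    Export-Csv -Path 'D:\music-files.csv' -NoTypeInformation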

However, I now seem to be stuck, and I feel like I'm missing something really, really obvious. Reading the notes here, it seems like all I need to do is choose one of the examples provided, update the path with what exists on my local PC, and execute the PS1. However, every time I do that, PowerShell spits out a large number of error messages. For ease of use, here is the active part of the script from the site:

PowerShell script
Param([string[]]$folder)
foreach ($sFolder in $folder)
{
    $a = 0
    # Shell.Application exposes the extended file properties shown in Explorer
    $objShell = New-Object -ComObject Shell.Application
    $objFolder = $objShell.Namespace($sFolder)

    foreach ($File in $objFolder.Items())
    {
        $FileMetaData = New-Object PSObject
        # Scan extended property indices 0..266 for this file
        for ($a; $a -le 266; $a++)
        {
            if ($objFolder.GetDetailsOf($File, $a))
            {
                $hash += @{$($objFolder.GetDetailsOf($objFolder.Items, $a)) = $($objFolder.GetDetailsOf($File, $a))}
                $FileMetaData | Add-Member $hash
                $hash.Clear()
            } #end if
        } #end for
        $a = 0
        $FileMetaData
    } #end foreach $File
} #end foreach $sFolder

Let's try the first and easiest example. I thought that all that was required was to alter the material after "Param", like this: Param([Get-FileMetaData -"D:\Music\ZZ Top - Greatest Hits" []]$folder) but that's apparently incorrect. I've tried a couple of other iterations, but I keep getting a laundry list of error messages. Any help or hints appreciated. Matt Deres (talk) 21:27, 2 September 2019 (UTC)[reply]
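
For what it's worth, the Param([string[]]$folder) line is the script's parameter declaration and is normally left untouched; the folder is supplied when the script is invoked. A minimal sketch of that invocation, assuming the quoted script has been saved as Get-FileMetaData.ps1 in the current directory (the file name and the second folder below are only examples):

# Leave Param([string[]]$folder) unchanged inside the script and pass the
# folder(s) on the command line instead. The script name is hypothetical.
.\Get-FileMetaData.ps1 -folder 'D:\Music\ZZ Top - Greatest Hits'

# Because $folder is declared [string[]], several folders can be passed at once:
.\Get-FileMetaData.ps1 -folder 'D:\Music\ZZ Top - Greatest Hits', 'D:\Music\Another Album'

# To get the result into Excel, pipe the output to Export-Csv
# (the columns follow whatever properties the first file happens to expose):
.\Get-FileMetaData.ps1 -folder 'D:\Music\ZZ Top - Greatest Hits' |
    Export-Csv -Path 'D:\metadata.csv' -NoTypeInformation

If PowerShell refuses to run the .ps1 at all, the machine's execution policy may need to allow local scripts, for example by launching it as powershell.exe -ExecutionPolicy Bypass -File .\Get-FileMetaData.ps1 followed by the -folder argument.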

I do apologize if this is a stupid question, but why write a script when you can simply use something off-the-shelf? If your goal isn't to learn PowerShell, I would recommend installing MediaMonkey, letting it index your collection, and then exporting it straight to Excel (if you Google "mediamonkey export" you get an explanation). Poveglia (talk) 21:43, 2 September 2019 (UTC)[reply]
Well, the short version is that I didn't know that was an option. Searching online for ways to obtain metadata led me to where I am; MediaMonkey never showed up (or else I didn't see it). Since there's little to lose by giving it a try, I'll do just that (thank you!), but now that I have a toe in, I'd like to figure out what the hell I'm doing wrong in PS. Matt Deres (talk) 00:01, 3 September 2019 (UTC)[reply]
With MediaMonkey you can export your collection to Excel's .xls (or to .csv, comma-separated values, which can also be used in Excel), and you can also clean it up a bit (you can easily change multiple ID3 tags in multiple files at once): File > Create Report > File List. MediaMonkey uses SQLite, by the way; your SQL knowledge might come in handy. There are add-ons to further modify the reports.[1] It supports all the file types I care about and more, and it is freemium, but I don't need the premium features. I am not a PowerShell expert, sorry, and I imagine it would be quite a long script if you wanted to support more than just the ID3 tags. Poveglia (talk) 01:15, 3 September 2019 (UTC)[reply]
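
If the report is saved as a CSV, it can also be sliced in PowerShell before (or instead of) opening it in Excel. A small sketch, assuming the export was saved as D:\mm-report.csv and contains an Artist column; both the path and the column name depend on the report chosen and are only examples:

# Count tracks per artist in the exported report and show the top 10.
# 'D:\mm-report.csv' and the 'Artist' column are assumptions about the export.
Import-Csv -Path 'D:\mm-report.csv' |
    Group-Object -Property Artist |
    Sort-Object -Property Count -Descending |
    Select-Object Name, Count -First 10
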
Thanks, MediaMonkey is working well. Matt Deres (talk) 15:59, 8 September 2019 (UTC)[reply]

Challenges of multi-socket CPU?

Lots of press lately about the recent dual-socket AMD Epyc server CPU beating a 4-socket Intel system at Geekbench.[2] There are also 8-socket Intel systems with even more cores (and higher cost) than the 4-socket ones. I can understand that one eventually hits a limit because of the communication mesh between sockets, but evidently Intel got 8 sockets to work. AMD also used to make 4-socket (I don't know about 8-socket) Opteron servers.

The obvious question is: why did they stop at 2 sockets with Epyc? Are there particular technical challenges that make it hard for them to do 4- or 8-socket Epyc servers? This stuff is too expensive for me (or alternatively still too weak, so I'd scale out to networked clusters), so I'm just wondering. Thanks. 67.164.113.165 (talk) 22:43, 2 September 2019 (UTC)[reply]

Let's do an analogy with the number of cylinders in a car engine. You can put as many in as you want (there have been cars with at least 16-cylinder engines). But there's a diminishing return on adding more cylinders, and eventually the increased complexity/number of parts costs more than it's worth. However, different car companies have put that maximum reasonable number at different points. For example, you can get a lot of power out of even a 4 cylinder engine, if you use high compression ratios and a supercharger, and pair it with a transmission with the right gear ratios. Or you can just go with lots of cylinders. SinisterLefty (talk) 23:43, 2 September 2019 (UTC)[reply]
See here. Poveglia (talk) 04:27, 3 September 2019 (UTC)[reply]
It's perhaps worth remembering that many (all?) Epyc CPUs are MCMs (multi-chip modules) with multiple CPU chiplets, so you already have to deal with CPU interconnects on a single socket. Adding inter-socket interconnects adds even more complexity [3]. While it's true that Intel do have 4P and 8P systems, AFAIK even they haven't yet bothered to try 4P or more with their MCM chiplet designs [4] [5]. Nil Einne (talk) 12:37, 3 September 2019 (UTC)[reply]
It's also worth remembering that historically, large computers had many processors (up to 2048) operating as a single machine with a single system image. The Altix range, for example, supports 2048 CPUs running a standard Linux (SLES) kernel. However, as Poveglia implies, these are not cheap machines! They also need specially designed parallel programming to fully utilise them. Martin of Sheffield (talk) 10:29, 4 September 2019 (UTC)[reply]
I should perhaps clarify that even when your CPU is monolithic, you probably still have to consider the interconnect between cores on that CPU. Still, AFAIK that tends to be simpler with monolithic CPUs, and maybe more importantly, at least in my eyes, it's easy to see how things get more complicated when your single-socket CPU is an MCM with multiple chiplets and not so far off a multiple-socket design. Anyway, I should maybe also mention that even if 2P or 8P is the maximum supported by a stock design, this may not mean it's impossible to make a higher socket-count system. See for example this [6], which seems to be a 32-socket Intel design with 896 cores and 1792 threads running some variety of Windows (it's a Microsoft system). You can see the Task Manager with a long scrolling list of logical processors, all of which seem to be at about 99-100% utilisation. I suspect the main purpose of this system was to develop and test the version of Windows they're using on it, although I believe Azure is a big part of Microsoft's business nowadays. I imagine this is somewhat easier for Intel systems designed for 8P, but I wouldn't be surprised if there is someone with an Epyc system with more than 2 sockets. Note that the source mentions that as core counts increase, even 4- and 8-socket systems are getting rarer. Nil Einne (talk) 10:30, 5 September 2019 (UTC)[reply]
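
As an aside, the socket/core/logical-processor head-count that the Task Manager screenshot reflects can be queried on any Windows box through CIM; Win32_Processor returns one instance per populated socket. A small sketch:

# One Win32_Processor instance per populated socket.
Get-CimInstance -ClassName Win32_Processor |
    Select-Object SocketDesignation, Name, NumberOfCores, NumberOfLogicalProcessors

# Total logical processors across all sockets.
(Get-CimInstance -ClassName Win32_Processor |
    Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
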
Mentioned in the comments is that one manufacturer of such systems is HPE, in particular the HPE Superdome Flex [7]. Also, I realised I maybe didn't make clear, but as mentioned in the article and in the comments, when you have such systems, while they may nominally be one system, you need to consider carefully the latency etc. between CPUs, memory etc. on different parts of the system in your software. I mean, you already get this even with 1P MCM systems, but it generally gets even more extreme with a design like the HPE Superdome or whatever the Microsoft one is, if it isn't that. And this is likely one reason why many find it better to just have multiple different systems and design your software accordingly. Nil Einne (talk) 10:58, 5 September 2019 (UTC)[reply]
On big machines, moving pages to localise memory accesses may well be done for you by the system. You can develop parallel software for clusters using techniques such as MPI, but in general direct memory mapping between processes will be quicker than networking, even with InfiniBand. Keywords in this response are HPC and BIG, which implies expensive! Martin of Sheffield (talk) 10:38, 6 September 2019 (UTC)[reply]