Wikipedia:Reference desk/Archives/Computing/2012 February 22
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
February 22
Does partitioning a disk erase data?
I have a Western Digital external hard drive with some data on it. It is currently FAT32-formatted, so it cannot hold files larger than 4 GB, and it has data on it that I want to keep. I want to add an HFS+ partition, but I want to keep the data I already have on it, which I cannot back up to my computer because I don't have enough space. If I add a partition to the disk, will it erase/overwrite all the data I have on it? --Melab±1 ☎ 02:34, 22 February 2012 (UTC)
- If your drive's current partition has used up all of the available space, you would have to resize the partition to free up unpartitioned space for the new partition. Resizing can usually be done safely, but it is not without risk, so a backup is always recommended. You can use something like gparted (use a Linux boot disk for this), which I've found to be the most reliable utility for partitioning. If you already have unpartitioned space then you should be able to add the new HFS+ partition safely... but if it were my data I would find a means to back it up, even if you have to borrow a USB stick for a day. Sandman30s (talk) 08:12, 22 February 2012 (UTC)
- ... or if you have lots of valuable data on the drive (maybe enough to fill a dozen USB flash memory devices?), borrow another external hard drive with plenty of empty space, or another computer or laptop with a large hard drive to back up your data, or use one of the internet backup facilities. If your data is valuable, it would be wise to have at least one separate backup somewhere else as a matter of policy, because hard drives can fail unexpectedly. Dbfirs 09:09, 22 February 2012 (UTC)
- Yes, back up your files. I know that first-hand: I lost all my files once when a repartitioning went wrong (the computer froze). I had to reformat the entire drive. -- Luk talk 09:37, 22 February 2012 (UTC)
- There's no reason why resizing a partition should be a dangerous operation. It's not fundamentally different from any other file system operation. The only reason to be less trustful of resizing is that it's less common than other filesystem operations, so it's more likely that the code has undetected bugs. But gparted is pretty widely used and reliable as far as I know. Or you could use Windows Vista or later to shrink the partition. That functionality is built into the file system driver, and the file relocation part of it is used by millions of people daily, since their machines are configured to defragment in the background. -- BenRG (talk) 21:56, 22 February 2012 (UTC)
- I respectfully beg to differ. Partitioning operations are not operating-system file operations; they are low-level storage operations. Like Luk, I had a mishap with Partition Magic (tragic!) where the software froze in the middle of a resize operation, and it was hell trying to recover my data thereafter. gparted works like a charm, but there is still no guarantee of success. There's also a knowledge factor: it's possible to choose the wrong option in gparted, what with primary/logical/extended partitions and so on. Sandman30s (talk) 07:52, 23 February 2012 (UTC)
- Ordinary file system operations are "low level" from the perspective of, say, saving a word processing document. This has nothing to do with reliability. To the extent that any of these things are safe, it's only because the implementation is well tested. Linux for a long time had only experimental NTFS write support which was certainly not safe even though it was "only" writing files. Now it has reliable NTFS support. I think partition resizing has also come of age. We're not stuck with Partition Magic any more, thankfully.
- It is always good to have a backup—the operative word being always. -- BenRG (talk) 20:24, 23 February 2012 (UTC)
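As a rough sketch of the built-in Windows shrink BenRG mentions (Vista or later): the following DiskPart script, run with `diskpart /s shrink.txt` from an elevated prompt, frees space at the end of a volume. The volume number and size below are hypothetical examples, and note that Windows' built-in shrink only handles NTFS (and raw) volumes, so for the OP's FAT32 external drive a tool such as gparted is still the relevant route.

```
rem List volumes and pick the one to shrink (number 3 is just an example)
list volume
select volume 3
rem Show how much the volume could be shrunk, then shrink by roughly 20 GB
shrink querymax
shrink desired=20480
```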
- The OP wants to repartition smaller. I don't think repartitioning moves the data first, so wouldn't any data located beyond the end of the new, smaller partition be lost? I'd certainly make a backup first. Bubba73 You talkin' to me? 22:06, 22 February 2012 (UTC)
- You're right about what I want to do. Currently, the external drive consists of a single FAT32 partition. My computer has about 3 GB of space left, so now I want to free up some more by moving some files—that I don't access as often as I used to—over to the external drive, but they are larger than 4 GB (they are probably mostly virtual hard drives that I have set up for VMware and VirtualBox). I want to add an HFS+ partition to the external drive to handle these files. It would not surprise me if some defragmenting and data consolidation would be necessary to do that. --Melab±1 ☎ 22:40, 22 February 2012 (UTC)
- Storage is cheap these days. If you're not living off ramen noodles every night, just buy an external backup. You shouldn't have to pay more than $100 for a 1 TB 3.5" bare drive or a 500 GB 2.5" bare drive. Then buy an enclosure for it and put it together yourself; it isn't hard (but make sure you get the right interface). I recommend the 2.5" drives even though they're not quite as capacious, because they're a lot more convenient — you don't need a power source for the enclosure; it will run off the power from the USB connection. Better yet, get two and back the data up twice. Then you can wipe the original drive and repartition it however you like, and copy the data back — that's quick-and-dirty, bulletproof resizing and defragmenting mixed into one, and then you can stick one of the drives in a safe deposit box, or lock it in your desk at work, in case some disaster should happen at home. --Trovatore (talk) 08:35, 23 February 2012 (UTC)
- (Oh, assuming your operating system is not on there of course — copying the data back might not get the operating system right.) --Trovatore (talk) 08:42, 23 February 2012 (UTC)
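A rough sketch of the copy-off/repartition/copy-back route Trovatore describes, assuming a Mac (which the HFS+ requirement suggests) and hypothetical mount points under /Volumes:

```sh
# Copy everything from the external drive to the new backup drive
rsync -av /Volumes/WDExternal/ /Volumes/BackupDrive/WDExternal-backup/

# ...repartition /Volumes/WDExternal however you like (e.g. in Disk Utility),
# then copy the data back:
rsync -av /Volumes/BackupDrive/WDExternal-backup/ /Volumes/WDExternal/
```

The trailing slashes matter to rsync: with them, the contents of the source directory are copied rather than the directory itself.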
- VMware definitely, and VirtualBox probably, support splitting large disk images across files of 4 GB or smaller. You could work around the problem that way (create a new disk image on the external drive, mount them both, clone the virtual partition, delete the single-file image). Or, if you don't plan to use the files directly from the external drive, you could stick them inside a compressed archive format that supports multi-volume archives. -- BenRG (talk) 20:24, 23 February 2012 (UTC)
- VMware does but it gave an error dialogue saying there was not enough space. --Melab±1 ☎ 03:54, 24 February 2012 (UTC)
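For reference, a sketch of the split-image conversion BenRG describes. The file names and paths here are hypothetical, and both commands create a copy of the image, so they need enough free space for that copy (which may be the "not enough space" error above):

```sh
# VMware (vmware-vdiskmanager ships with Workstation and Fusion): copy a disk
# into a growable image split into 2 GB files ("-t 1" selects that type)
vmware-vdiskmanager -r bigvm.vmdk -t 1 /Volumes/WDExternal/bigvm-split.vmdk

# VirtualBox: clone a disk into a VMDK split into 2 GB pieces
VBoxManage clonemedium disk bigvm.vdi /Volumes/WDExternal/bigvm-split.vmdk \
  --format VMDK --variant Split2G
```

(In older VirtualBox versions the subcommand is `clonehd` rather than `clonemedium`.)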
Exactly how did Wikipedia get so popular?
So obviously, Wikipedia is one of the most popular websites on the planet. But I'm wondering, exactly how did it become popular? What was the actual trigger for Wikipedia to go from obscurity to fame? Around 2004, I was already seeing stuff about Wikipedia, and it was only around 3 years old! So how did Wikipedia become popular anyway? Was it endorsed by someone famous, was it featured in a magazine or newspaper, or was it through word-of-mouth? Narutolovehinata5 tccsdnew 02:43, 22 February 2012 (UTC)
- All of the above. It wasn't an overnight sensation. See History_of_Wikipedia#2001, though, if you want one definitive year that put it on the map — news stories, geek sites, and so on. But it still took another two years to reach massive penetration into the mainstream. Its growth followed a roughly exponential function in the first years — such functions take a while to build, but quickly get gigantic. By 2005 or so even my grandmother had heard of it. --Mr.98 (talk) 03:03, 22 February 2012 (UTC)
- Also, Google likes Wikipedia, which helped. Robinh (talk) 03:22, 22 February 2012 (UTC)
- Yeah, being at the top of almost every Google result certainly helped. Google probably likes us because we link articles with similar concepts. SEO done right :P -- Luk talk 09:38, 22 February 2012 (UTC)
- I think that a big factor is that people would write cross-reference-style links to Wikipedia articles in their prose on other websites, as a way of providing background without breaking flow. This makes it look like everyone thinks that Wikipedia is relevant to what they're talking about (which, to be fair, is pretty much correct). So, with PageRank-like systems, being a canonical reference resource is just about perfect SEO. Paul (Stansifer) 19:51, 22 February 2012 (UTC)
It was the first wiki of its kind, AFAIK. ¦ Reisio (talk) 03:28, 22 February 2012 (UTC)
this (switching HDMI cables)
Is it okay to switch an HDMI cable (plug/unplug) between two devices while the devices are still connected to the mains? I mean they're turned off, but the plugs are still in the sockets. — Preceding unsigned comment added by 77.35.11.229 (talk) 05:36, 22 February 2012 (UTC)
- I added to your title to make it useful. StuRat (talk) 05:41, 22 February 2012 (UTC)
- I think it's fine, as those cables carry low-voltage digital signals and not mains power. It's even better that your devices are turned off, but it's not necessary. It's very much the same as, for example, plugging the RCA cables of your camera into your TV... you don't have to turn either device off. I've been swapping HDMI cables for years with no apparent ill effect. Sandman30s (talk) 07:57, 22 February 2012 (UTC)
Yep. I move my single HDMI cable between my Xbox 360 and my PS3 regularly. Neither console can be properly turned off easily, only put into standby, so I have to do this. 192.84.79.2 (talk) 09:21, 23 February 2012 (UTC)
Annoying Windows behaviour
In my Windows XP, SP3, if I try to drag a window by its title bar, and accidentally release and reclick on it while dragging, it maximizes the window. Can this feature be disabled? StuRat (talk) 06:06, 22 February 2012 (UTC)
- Close, but no cigar. That's how to disable the Aero Snap feature in Windows 7. StuRat (talk) 06:20, 22 February 2012 (UTC)
- Oh. My mistake. I forgot you said Windows XP. Honestly I cannot recreate that behavior on my system, because you have to stop dragging in order to click. I know you can double-click on a title bar to maximize it, but I've never heard of a window maximizing by single-clicking (unless you click on the maximize button, of course).--Best Dog Ever (talk) 06:24, 22 February 2012 (UTC)
- Maybe I do double-click. My mouse button can do that when I try to single-click. How can I disable "maximize on double-click"? StuRat (talk) 06:54, 22 February 2012 (UTC)
- Probably not, although buying a new mouse is probably best, before you double-click on something harmful by mistake. Mice are quite cheap now... -- Luk talk 09:40, 22 February 2012 (UTC)
- It seems not to be natively possible in Windows, but I think you can do it with WindowBlinds. It costs, but you get a free trial. There might be a free alternative that does it as well, but I can't think of one right now. - Cucumber Mike (talk) 11:05, 22 February 2012 (UTC)
- You can adjust the speed/sensitivity of double-click in the Control Panel / Mouse settings. Non-intuitively (to me), you would move the slider to "fast". --LarryMac | Talk 16:18, 22 February 2012 (UTC)
Perl - getting the value of the ones place
If a user is asked for a number, I want to be able to tell what the ones place contains in Perl. So if they gave a response of 15, I want to know that the ones place value is 5. How can I do that without regular expressions? I can get things like the tens place by dividing the number by 10 but how do I do that with the ones place? I thought it would have something to do with modulo but now I'm not sure. Dismas|(talk) 09:32, 22 February 2012 (UTC)
- I can't remember the Perl syntax for modulo, but it's usually something involving the percent sign: so 15 % 10 would return 5. Yes, here it is: http://en.wikibooks.org/wiki/Perl_Programming/Operators Tinfoilcat (talk) 09:37, 22 February 2012 (UTC)
- Yep, got it. I was just coming back to say that I figured out the same thing. Thanks though!! Dismas|(talk) 09:53, 22 February 2012 (UTC)
- Make sure that you're using the absolute value if the number could be negative. I don't know how Perl's mod operator handles negative numbers, but until you check, take the absolute value before taking the remainder. KyuubiSeal (talk) 01:59, 23 February 2012 (UTC)
- Building in error checking isn't part of the assignment. But thanks for the suggestion! Dismas|(talk) 15:09, 23 February 2012 (UTC)
- I know it's resolved... but you could also use substr. Shadowjams (talk) 19:48, 23 February 2012 (UTC)
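Pulling the thread together, a minimal Perl sketch of the modulo approach, with the abs() guard suggested above for negative input:

```perl
#!/usr/bin/perl
use strict;
use warnings;

print "Enter a number: ";
chomp( my $n = <STDIN> );

my $ones = abs($n) % 10;              # ones-place digit, e.g. 15 -> 5
my $tens = int( abs($n) / 10 ) % 10;  # tens-place digit, same idea

print "Ones digit: $ones\nTens digit: $tens\n";
```

Shadowjams's substr alternative would be something like `substr($n, -1)` on the string form, which grabs the last character directly.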
computer
What is the importance of the formula bar in MS Excel? — Preceding unsigned comment added by 223.176.108.249 (talk) 15:58, 22 February 2012 (UTC)
- It displays the contents of the cell which has focus. It serves as an area where the contents of the cell - whether data or formula - can be edited. And it provides dialogue boxes which assist in putting together formulae when you press the fx button to the left of the formula bar (at least, in Excel 2003). --Tagishsimon (talk) 17:19, 22 February 2012 (UTC)
- Without it, Excel would be of little use except as the "dumb paperlike inventory/database sheet you have to update by hand" that most people use it as. 188.6.76.0 (talk) 18:11, 22 February 2012 (UTC)
- Not really, as the user interface allows users to enter data and formulae into cells; the formula bar is akin to a second window on the focused cell and as such redundant. --Tagishsimon (talk) 20:11, 22 February 2012 (UTC)
- Point taken. Still, many would be surprised that you can use a cell for anything other than entering a literal or maybe coloring its foreground or background, or changing the font/bold/italic. — Preceding unsigned comment added by 80.99.254.208 (talk) 10:10, 23 February 2012 (UTC)
Google and privacy
Googlebot crawls any given page many times. What happens with the old versions? They certainly don't seem to be made available in public, but are they stored on Google's servers? Is this known at all? What do they officially say? It's bad enough that the Wayback Machine's bots archive pages unless you actively tell them not to, but the thought that Google might at some point reveal historical versions of webpages makes my skin crawl. — Preceding unsigned comment added by Xcvxvbxcdxcvbd (talk • contribs) 18:33, 22 February 2012 (UTC)
- I don't know how long Google keeps old data. You can control what Google and other web bots scan by using a robots.txt file. RudolfRed (talk) 18:37, 22 February 2012 (UTC)
- Bots that adhere to that formality, anyways. ¦ Reisio (talk) 18:54, 22 February 2012 (UTC)
- Hardly. You can just turn it off or on, and that only covers data from the point when you figure that out. — Preceding unsigned comment added by Xcvxvbxcdxcvbd (talk • contribs) 18:53, 22 February 2012 (UTC)
- (EC with below) I suggest you read the article. Robots.txt allows you to control what parts of a site bots are allowed to visit. You can control by bot, or just ban them all. If you don't want to ban bots from crawling your site but do want to stop them making publicly available copies/archives, while this isn't supported by robots.txt, you can use robots meta tags, as I mentioned here Wikipedia:Reference desk/Archives/Computing/2011 November 25 and explained here [2]. This is respected by most of the major bots (including webcitation), although not apparently the archive.org bot [3]. Archive.org will remove historic archived content on a page if they find a new robots.txt disallowing it [4], although obviously it won't happen until they check your robots.txt again, and perhaps allowing for some processing time. (Well, it's not clear to me if they delete the content, or just make it unavailable.) It wouldn't surprise me if Google etc. do likewise if they ever decide to make older copies of pages available. Of course, if you're so paranoid about content being archived, your best bet is probably to disallow all bots from your entire website and check your logs regularly to look for bots not respecting robots.txt, although even that is far from a guarantee. However, I somewhat agree with Reisio here: if you are sticking info you want to keep private onto the publicly accessible web, and expect trying to stop well-documented bots will be enough to guarantee it stays private, you have another thing coming. In fact, since it's a long time since 1994, if you weren't aware of bots and how to control them, you probably shouldn't have been administering a website anyway. Nil Einne (talk) 20:15, 22 February 2012 (UTC)
- 1994? What?
- robots.txt has existed since 1994. Yahoo and Lycos ditto, and AltaVista was established in 1995. Internet Archive in 1996. Google and MSN Search in 1998. It was understandable in 1994 for someone to be unfamiliar with common bots and the well-documented ways to control them, but it's now 2012, and anyone not aware probably shouldn't be administering a website. From some of the OP's other questions, perhaps it's not surprising they're trying to hide stuff they've done. Nil Einne (talk) 22:48, 22 February 2012 (UTC)
- Providing a robots.txt file does not enforce any data protection. It's the technical equivalent of asking "please," and hoping that the robot plays nice. If you want to deny access, you should use a securely encrypted authentication mechanism, and deny delivery of content to any un-authenticated party. This is actually made clear in the robots.txt article. I would go so far as to say "robots.txt is worse than useless." It merely provides a web administrator a false sense of security, without actually protecting the data in any way. Nimur (talk) 23:07, 22 February 2012 (UTC)
- Well, I already acknowledged that it isn't going to guarantee the data is protected in my first post (although perhaps I wasn't clear enough that trying to manually ban all bots which ignore robots.txt is still going to miss a lot of stuff). However, the fact remains that the OP is complaining about Google, the Internet Archive, and bots for which a properly configured robots.txt will pretty much guarantee that they will not index the site. This doesn't help you with bots that don't respect the standard, or people, who are not even expected to, which, as I've said, I acknowledged in my first post. But it does mean it doesn't make sense to be complaining about bots which do respect the standard just because you were too ill-informed to be aware of it and know how to use it. In other words, what I've been saying all along is that someone administering a site should be aware of the use of robots.txt, including its limitations but also where it does work. It's apparent the OP is not aware of this at all, which suggests they should not be administering a site. (In case it wasn't obvious, I was also suggesting the OP was worrying about the wrong thing if they are worrying about Google, the Internet Archive, etc.)
- Note that I would disagree that robots.txt is useless. It's quite likely some people may wish to stop public, well-known sites indexing their content, which can easily be done with robots.txt (since, as I said, they do by and large respect it), while being fully aware this doesn't mean no one is going to index or keep copies of their content, since there's a fair chance many people still will. The fact that some people don't understand the limitations of robots.txt doesn't make it useless; it simply means people probably shouldn't administer a site if it matters to them and they aren't familiar with such details. (The same way, of course, people not even aware of robots.txt or how to use it shouldn't be.)
- Nil Einne (talk) 13:31, 23 February 2012 (UTC)
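For concreteness, a minimal example of the two mechanisms discussed above. The /private/ path is a placeholder, ia_archiver is the user-agent the Internet Archive's crawler has historically honoured, and none of this restrains crawlers that choose not to comply:

```
# robots.txt at the site root

# Ask the Internet Archive's crawler to stay out entirely
User-agent: ia_archiver
Disallow: /

# Ask all other compliant bots to skip one directory
User-agent: *
Disallow: /private/
```

To ask compliant indexers not to keep cached copies of a particular page, the corresponding robots meta tag is `<meta name="robots" content="noarchive">` in the page's `<head>` (add `noindex` to keep the page out of search results as well).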
- Controlling data expiration is difficult, because each subsequent processor of information has to adhere to the expiration policy. For example, what if some linguist is doing some sort of analysis on n-grams collected from crawling the web: are they obligated to expire their corpus after some period of time? Or even the results of the analysis? Are people obligated to forget things they read on the Web after the content expires? What it means to crawl or not crawl is well-defined. Data expiration is not well-defined, unless you use a definition that is far too strict to follow. Publicization of information is practically irreversible. Paul (Stansifer) 20:01, 22 February 2012 (UTC)
Considering Google's massive resources, it would be surprising if they weren't archiving old cache data. And even if you somehow stopped all crawlers and got all internet preservation sites to delete their archives, there's nothing stopping a normal user from saving a copy themselves and uploading it somewhere. If there is something you don't want archived, don't put it on the internet to begin with. 82.45.62.107 (talk) 23:04, 22 February 2012 (UTC)
- It's extremely tiresome to hear this constantly. OBVIOUSLY this wouldn't be a problem if nobody posted sensitive or badly-thought-through content, but they DO, and not seldom about or involving others. — Preceding unsigned comment added by Xcvxvbxcdxcvbd (talk • contribs) 01:11, 23 February 2012 (UTC)
- Google, of course, does not care about this last observation, and will only care about it if there's legislation somewhere allowing the authors or hosters of content to order search engines, archives, and, I suppose, all other users to delete a given piece of content, and mandating that these orders be followed. Comet Tuttle (talk) 04:38, 23 February 2012 (UTC)
- Your comment doesn't seem relevant to your questions. You were complaining about the Internet Archive and Google indexing and preserving content, and the fact that you have to learn how to control them from indexing a site where you have sufficient control to place a robots.txt. This is a moot point if you are concerned about other people posting content, I presume on other sites, where you don't have such control. If you don't have control over the site, then worrying about other sites indexing or archiving the content is pretty silly, since you have no way of stopping the actual person in charge of the site preserving it or even spreading it around, forever. And since it's their site, it's pretty much their choice who they want to index and archive it, presuming the material itself doesn't violate any laws. (In other words, why worry about archives of historic content kept by other sites when the site itself keeps complete archives?) If you are worried about comments posted on a site you control, then the logical solution is requiring moderation before public visibility. (Of course, if you don't know about robots.txt then you may not know enough to stop script kiddies breaking your site, so this may not guarantee content isn't going to appear on your site when you don't want it.) Nil Einne (talk) 13:31, 23 February 2012 (UTC)