
Wikipedia:Reference desk/Archives/Computing/2010 August 16

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 16


How much information on web browsing habits is accessible from the main node in a home network?


Without going into too much detail on my particular situation (it's complicated), let's just say that the administrator of the network I use at home has become extremely untrustworthy and has made some threats to me, and I suspect they may attempt blackmail if they can obtain any sensitive or embarrassing information. Setting up a second network is not an option at this time, nor is simply avoiding the network altogether, as I'm a student and have work that needs to be done over the internet while I'm not at the university (where I have to use an insecure wireless network anyway, so it's not suitable for many non-school tasks). And I know what everybody will assume here, but I am a legal adult and the administrator in question is not my parent, so I'm not attempting to subvert any authority with my actions.

My question is, how much information can they intercept or access from the computer that controls the router/wireless access point? I'm pretty sure they can intercept any packets I send, unless I encrypt them, but would they be able to access any lists of web sites I've viewed? I'm not talking about the internet history (though I do clear that), but is there a list of URLs (or server addresses) accessed by my computer saved on the router that they might access? The router in question is an AirPort Extreme, though I'm not certain of any exact model numbers as I didn't set it up. I've done what I can to minimize the possibility of most of the threats made, but I don't know enough about the router options to judge if this one holds any water.

Also, are there any security precautions I should take? I don't run as the root/admin user as a rule, and I've physically unplugged my desktop PC from the network except when I (rarely) need to connect to the internet. My laptop uses the wireless connection though, and I'm not sure of anything else I can do to make it more secure, apart from enabling encryption where I can.

69.243.51.81 (talk) 05:01, 16 August 2010 (UTC)[reply]

1. Obviously, you should move elsewhere. 2. Yes, they can stream to their PC a list of the websites you visit. Comet Tuttle (talk) 05:43, 16 August 2010 (UTC)[reply]
You should consider using a secure tunnel to a trusted proxy server; or use a secure tunnel to the Tor network. These will obfuscate your web viewing habits. Note that even if your connection is encrypted, the administrator can know what the destination of that secure tunnel is - so that is why you should use a proxy server. The administrator will only be able to know that you are making encrypted connections to the proxy - they will be unable to trace what the proxy is relaying for you. Your university may host remote-access servers, which you can use as secure proxies. Nimur (talk) 08:16, 16 August 2010 (UTC)[reply]
If you have or can get a Unix shell account from your school or your ISP, you can use the -D option of PuTTY or OpenSSH to turn that into a SOCKS proxy, which you can then use in the same way you'd use Tor (which also runs as a SOCKS proxy). The advantages are that it's much faster, and the proxy (which can see all of your traffic) is administered by your school or your ISP, instead of some random person who happens to be running a Tor exit node. -- BenRG (talk) 19:40, 16 August 2010 (UTC)[reply]
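For illustration, a minimal sketch of the -D approach described above, assuming OpenSSH and a hypothetical shell host provided by your school or ISP:
ssh -D 1080 yourusername@shell.example.edu
This opens a local SOCKS proxy on port 1080; you then configure the browser to use a SOCKS proxy at localhost, port 1080. From the home network, the administrator only sees an encrypted SSH connection to the shell server.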

SAP


What are the advantages of SAP Reporting tool? What are the types of SAP Reports available? Thank you for the answers. —Preceding unsigned comment added by 61.246.57.2 (talk) 05:38, 16 August 2010 (UTC)[reply]

Haskell functions - instances of the Eq class?


I've been told that in general it isn't feasible for function types to be instances of the Eq class, though sometimes it is. Why isn't it feasible in general, and when is it feasible? Surely functions are equal if they return equal values for equal arguments and not equal if they don't? SlakaJ (talk) 07:44, 16 August 2010 (UTC)[reply]

Function equivalence is undecidable in general. You can't compare every return value if the domain is infinite, and even if it's finite, the function might run forever when applied to certain arguments, and you can't (in general) tell whether it will run forever or just slightly longer than you've tried running it so far. You could write an instance like (Data a, Eq b) => Eq (a -> b) that would attempt to prove equivalence or inequivalence by trying every argument in turn, but it would fail (by running forever) in many cases. There are families of functions for which equivalence is decidable—for example, primitive recursive functions with finite domains—but there's no way to express constraints like that in Haskell. -- BenRG (talk) 19:28, 16 August 2010 (UTC)[reply]
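For concreteness, a minimal sketch of such an instance for functions over a small finite domain, using Bounded and Enum (rather than Data) to enumerate every possible argument; it still diverges if either function diverges on some argument (enable FlexibleInstances if your GHC asks for it):
-- compare two functions pointwise over every value of a finite domain
instance (Enum a, Bounded a, Eq b) => Eq (a -> b) where
  f == g = all (\x -> f x == g x) [minBound .. maxBound]
-- example: (\b -> not (not b)) == (id :: Bool -> Bool)  evaluates to True
With this in scope, equality on something like Bool -> Bool is decidable by brute force, but any infinite argument type (Integer, lists, other functions) is out of reach, for exactly the reasons given above.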
Thanks very much SlakaJ (talk) 14:07, 17 August 2010 (UTC)[reply]

Maximum # of Cores (i.e. Logical Processors) & Amount of RAM in various Linux Operating-Systems


Hi.

I want to know the maximum number of processing cores (i.e. logical processors, not physical sockets) and the maximum amount of RAM that each of the following operating systems can support.

  1. Mandriva Linux One 2010
  2. Gentoo 64-bit Linux
  3. Ubuntu 10.04 Linux 32-bit Server Edition
  4. Ubuntu 10.04 Linux 32-bit Desktop Edition
  5. Ubuntu 10.04 Linux 32-bit Netbook Edition
  6. Ubuntu 10.04 Linux 64-bit Server Edition
  7. Ubuntu 10.04 Linux 64-bit Desktop Edition
  8. Fedora 13 Linux 32-bit GNOME Edition
  9. Fedora 13 Linux 32-bit KDE Edition
  10. Fedora 13 Linux 32-bit LXDE Edition
  11. Fedora 13 Linux 32-bit XFCE Edition
  12. Fedora 13 Linux 64-bit GNOME Edition
  13. Fedora 13 Linux 64-bit KDE Edition
  14. Fedora 13 Linux 64-bit LXDE Edition
  15. Fedora 13 Linux 64-bit XFCE Edition
  16. Debian 5.0.4 Linux 64-bit
  17. Sun Microsystems' OpenSolaris 2009.06
   Thank you in advance to all respondents.

    Rocketshiporion
We have articles on all these operating systems (Mandriva Linux, Gentoo Linux, Ubuntu (operating system), Fedora (operating system), Debian & OpenSolaris) but, if system requirements are mentioned at all, it is always to define the minimum requirements and not the maximums. I also took a look at a few of the official sites, but again always the minimum requirements and not the maximums. Most distributions run community forums, so you could try asking there (for example, this post suggests the maximum addressable RAM on 32-bit Ubuntu, without using something called "PAE", is 4GB).
On another subject, it really is not necessary to write your post using HTML markup. Wiki-markup is flexible enough to achieve what you want and shorter to type (for example, simply precede each line with a # to create a numbered list; no need for all that <ol>...<li>...</li><li>...</li></ol>). A brief guide can be seen on Wikipedia:Cheatsheet. Astronaut (talk) 12:07, 16 August 2010 (UTC)[reply]
PAE is Physical Address Extension. While (absent PAE) a 32-bit OS can address 4 GB of memory, that doesn't mean 4 GB of RAM. That 4 GB address space also has to accommodate all the memory-mapped peripherals, particularly the apertures of PCI devices like the graphics adapter. So, in practice, while you can install 4 GB of RAM in a machine running a 32-bit OS, you'll actually see about 3.3 GB of that. Precisely how much is a function chiefly of the motherboard and the installed adapter cards rather than the OS. For any purpose that needs lots of RAM (where 4 GB these days isn't lots at all) you'd want a 64-bit OS. -- Finlay McWalterTalk 12:23, 16 August 2010 (UTC)[reply]
The number of processors and the quantity of RAM are not determined by the distribution - they are determined by the kernel. You can "easily" swap in a different kernel on any of the above systems. All of the above distributions (except OpenSolaris, which uses the Solaris kernel) are installed by default with a Linux kernel version 2.6 (and many will allow you to "easily" substitute a 2.4 kernel if you wanted to). The Linux 2.6 kernel requires SMP support to be enabled if you want to support multiple CPUs, but it can theoretically support an "arbitrary" number of symmetric multiprocessors (if you recompile the kernel, you can specify the maximum number of CPUs you want). The Kernel Configuration Option Reference tells you how to set up SMP (multi-processor) support if you are recompiling your kernel for any of the above. On the other hand, if you are using the kernel distributed with the "default" distribution, make sure that you select an SMP option; the compiled binary will probably have picked a "reasonable" maxcpus parameter. I have several SMP-enabled netbooks running Ubuntu, based on the default "Netbook" distribution - so it's really irrelevant which distribution you pick, if you switch the kernel.
While in theory you can recompile a 2.6 kernel with maxcpus=arbitrarily_large_integer, it is very unusual to see any kernel binary that supports more than 32 logical x86 cores. At a certain point, if you want more than that, you will probably have a custom system architecture, and should know what you are doing when re-engineering at the kernel level. Here is detailed information for the SMP linux system-designer (almost 10 years old and out of date, based on Kernel 2.2...). The MultiProcessor Specification defines the architecture for x86 cores; there are similar (but usually proprietary) specifications for MIPS, ARM, POWER, and Cell processors. The limiting factor will probably be your hardware - whether your BIOS supports symmetric access to physical memory, and whether your CPU architecture has a hardware limitation in its cache-coherency protocol. The Linux kernel will abstract all of this (that is what is meant when the term "SMP" is used); but if the hardware does not support that abstraction, you will need to use a NUMA memory architecture and a multi-operating-system parallelization scheme ("node-level parallelism" - see "why a cluster?") to manage your CPUs, because the actual circuitry does not support true shared-memory programming. With the magic of virtualization, you can make all those operating systems "look" like one unified computer (e.g., Grid Engine and its ilk) - but strictly speaking, these are multiple machines. Though the interface to the programmer is simple and appears to be one giant computer with thousands of CPUs, there is an obvious performance penalty if the programmers choose to pretend that a NUMA machine is actually a shared-memory machine. Nimur (talk) 16:44, 16 August 2010 (UTC)[reply]
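To make the kernel-limit point concrete, a rough sketch of where these knobs live (option names are for 2.6-era x86 kernels; exact menu locations vary by version):
# in the kernel build configuration (.config) before compiling:
CONFIG_SMP=y
CONFIG_NR_CPUS=64          # compile-time ceiling on logical CPUs
# at boot, the maxcpus= kernel parameter can lower (but not raise) the number of CPUs brought online
# on a running system, count what the kernel actually sees:
grep -c ^processor /proc/cpuinfo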
Thank you to Nimur for the information about the maximum cores and RAM being determined by the kernel. Then what is the maximum number of cores and the maximum amount of RAM supported by the Linux 2.6.35.2 kernel? And the same in regard to the Solaris kernel? Rocketshiporion Tuesday 17-August-2010, 11:54pm (GMT).
As I mentioned, if you use the default, unmodified SMP kernel distributed with the distributions, the limit is probably 32 CPUs. You can recompile with an arbitrary limit. This will depend on the architecture, too; x86 CPUs use MPS, so 32 seems to be an "upper bound" for the present (2010-ish) system specifications. I suspect that as more Linux kernel hackers learn to love and hate QPI, there will be a major re-engineering effort of the kernel's SMP system (in the next year or two). To learn more about the kernel, consider reading The Linux Kernel, from The Linux Documentation Project (old, but introductory-level); or Basic Linux Kernel Documentation from the folks at kernel.org.
For main memory, x86_64 hardware seems to support up to 44 bits, or 16 terabytes of physical memory (but good luck finding hardware - motherboards, chipsets, and so on, let alone integrated systems); I've seen sparse reference to any actual hardware systems that support more than 64 GB (recent discussion on WP:RDC has suggested that 96 GB and even 256 GB main-memory servers are on the horizon of availability). This forum (whose reliability I do not vouch for) says that the 64-bit Linux kernels support up to 64 GB with x86_64 and 256 GB with AMD/EMT processors. If you want to dive off the deep end, SGI/Altix supports up to sixteen terabytes of unified main memory in the Altix UV system (at a steep performance penalty). Commercial Solaris ("from Oracle") discusses maximum performance boosts for one to eight CPUs (though does not specify that as a hard upper-limit). They also support SPARC, x86, and AMD/EMT; their performance benchmarks make some vague claims about advanced memory technologies for large memory systems (without specifying a hard upper boundary). OpenSolaris uses an older version of the Solaris kernel; I can't find hard limits on upper-bounds for number of CPUs or RAM (but suspect it's awfully similar to the Linux limitations). Since you're asking, here's why you'd want to use Solaris instead of a Linux kernel: fine-granularity control on SMP utilization. The system-administrator can control, to a much greater level than in Linux, the specific processes that bind to specific physical CPUs, and how much time each process may be allocated. The Linux kernel basically allows you to set a priority and a "nice" value for each user process, and then throws them all into a "free-for-all" at the kernel scheduler. Solaris gives you much more control (without going so far as to be a real time operating system - a trade-off that means a little bit less than 100% control and a whole lot less work at the whiteboard designing process schedules). I have never personally seen a Solaris machine with more than a gigabyte of RAM (but it's been a long while since I worked with Solaris). Nimur (talk) 07:11, 18 August 2010 (UTC)[reply]

wget


Why is wget v1.12 not available for Windows yet? It was released a year ago. I read something about them not being able to port it; what does that even mean? I'm using v1.11 and it works OK on Windows, but I want the new CSS support in version 1.12. 82.44.54.4 (talk) 11:16, 16 August 2010 (UTC)[reply]

There are several posts about the Win32 native port of 1.12 on the wget mailing list. This one seems to explain it best - it seems the core of 1.12 introduced some changes that require individual platforms to adapt, and for Win32 "no one did the work." -- Finlay McWalterTalk 12:14, 16 August 2010 (UTC)[reply]
Adding to what Finlay wrote, if you're really desperate for a Win32 version of 1.12 then you can get a development version here, although as with all pre-compiled binaries, use at your own risk (although they actually also include the build files so you could probably compile it yourself if you wish).  ZX81  talk 13:42, 16 August 2010 (UTC)[reply]
Cygwin's wget is version 1.12. -- BenRG (talk) 05:29, 17 August 2010 (UTC)[reply]

Problem with Google Chrome


Every time I type certain Chinese characters in the pinyin input method Google Chrome crashes! Why is that? Kayau Voting IS evil 13:51, 16 August 2010 (UTC)[reply]

It's a computer bug. --Sean 18:15, 16 August 2010 (UTC)[reply]
For more technical information, see Google Chrome's bug-report - Chrome has had a long history of IME problems. It seems that Google Pinyin and Google IME might help. What IME are you using? Nimur (talk) 20:21, 18 August 2010 (UTC)[reply]

RAM


Hi all, this is silly, but I have a problem understanding what the RAM actually does. Why do we always prefer a RAM of bigger size? What's the use? What's the difference between the RAM and the processor? —Preceding unsigned comment added by Avril6790 (talkcontribs) 14:40, 16 August 2010 (UTC)[reply]

The processor does lots of calculations (everything a computer does is arithmetic once you get down to the lowest levels). The RAM is for storing the instructions for the calculations and the data those calculations are being done on. It is much quicker to read and write information to RAM than to the hard drive, but if there isn't enough RAM to store all the instructions and data that the processor needs or is likely to need in the near future then it will have to use the hard drive (the "swap file", to be precise) to store the extra and that slows everything down. --Tango (talk) 15:04, 16 August 2010 (UTC)[reply]
RAM (Random-access memory) is short-term working space. Its size (in gigabytes) is a measure of how much information can be kept easily accessible. (Programs are themselves information, so even if a program isn't manipulating all that much, it can still take up space on its own.) Hard drives are bigger and more permanent, but immensely slower, because they have moving parts.
The CPU (Central processing unit) manipulates the contents of RAM. A faster CPU can perform computations more quickly.
Nothing could happen without either of them. As it happens, these days, the performance of personal computers for most tasks is limited by RAM size, because modern applications tend to be quite memory-hungry, people like to do tasks involving a large amount of data, and people like to leave a bunch of applications open at once. CPUs spend a great deal of time twiddling their thumbs, waiting for more information, and if the information is coming from RAM, rather than the hard drive, less time is wasted. Paul (Stansifer) 16:35, 16 August 2010 (UTC)[reply]
The "classical" analogy I use in explaining RAM, processing speed, and hard drives (which are all intertwined in practical usage) to people not very computer literate is as follows: imagine you are working at a desk, and your work consists of reading and writing on lots of paper. The desk has deep drawers that contain all of the stored paper you use. That is your hard drive. To use the paper, though, you have to put it on the surface of the desk. The size of the surface is your RAM. Once it is on the surface, there is a limit to how fast you can read, write, edit, whatever, as you go over the paper. This is your processor speed. It is an imprecise analogy in many ways, but perhaps it will be useful as a very basic approach to it. If the surface of the desk is too small, you're constantly having to use the drawers. This slows you down. If the surface is very large, you can have a lot of paper on top to access whenever you want it. If you yourself are quite slow, everything takes longer. And so on. "Faster" RAM involves you being able to move things around quicker once you have it on the surface of the desk. A "faster" hard drive means you can get things in and out of the drawers quicker. A multiple-core processor is kind of as if you, the worker, had been replaced by two or three people all working simultaneously (the main difficulty being that you can't usually all work on the same part of the same problem at once). --Mr.98 (talk) 22:38, 16 August 2010 (UTC)[reply]
Actually, this is not a bad analogy for someone who is knowledgeable about computers, 98! --Ouro (blah blah) 06:00, 17 August 2010 (UTC)[reply]
Links not working in Firefox

I'm trying to fix a problem on a friend's computer. The main symptom is that, on Firefox, links on certain web pages don't work. The most notable of these is Google. Clicking on any result from a Google search will cause the tab to say "loading" and the status bar to say "waiting for..." without any result. Apparently this also happens on other sites (but we couldn't find one to replicate this). This is a problem limited to Firefox, since I tried K-meleon and that works. Oddly, there is no IE on this computer; the application seems to have been accidentally deleted somehow? My friend suspects this is somehow relevant, but I doubt it, since I doubt Firefox depends on IE or any of its DLLs, or if it does, K-meleon would too. One point that might matter is that the problem existed on an older version of Firefox, and persisted after the update somehow. I tried disabling all add-ons, which didn't help. Any idea what causes this, or what to experiment with? (Supposedly the same problem is causing a general slowing down of browsing, but that might just be confirmation bias.) Card Zero (talk) 16:58, 16 August 2010 (UTC)[reply]

Go to Tools->Options->Network->Settings and try various options in there (if it isn't currently set to auto-detect, try that first; there may also be instructions from whoever supplies your internet connection on what those settings should be). I can't think why problems with the network settings would cause the exact symptoms you describe, but they could explain similar symptoms and it would certainly explain why it works in one browser but not another. --Tango (talk) 17:09, 16 August 2010 (UTC)[reply]
OK, auto-detect didn't help. I put it back to "use the system proxy settings". I'm not quite sure why this sort of thing would prevent links from working in google, while not preventing browsing as such. One can copy the links and paste them into a new tab, and that works; or perform a google search, quit (while saving the tabs) and restart, and the links on the google page work when it reappears that way. Meanwhile, my friend attempted a new install of Internet Explorer, and Avast has noticed the new file and reported it as a trojan ("Win32:Patched-RG [Trj]") - is that probably a false positive, or should I react to it? Card Zero (talk) 17:33, 16 August 2010 (UTC)[reply]
I'm perplexed by what you mean by "a new install of Internet Explorer". All versions of Windows (for much more than a decade) come with Internet Explorer and it is, essentially, impossible to remove (the most that can be done is to hide it). Some of the later versions of IE are optional downloads, but even then you generally get them using the Windows Update mechanism, or at least as a download from Microsoft's own site. If your friend has done anything else (like type "internet explorer download" into Google and blithely download whatever that finds) then that's sending him off into a vortex of malware and pain. The fact that Google doesn't work in Firefox is also curious, and leads me to wonder whether the system already has malware on it (redirecting search traffic is a common trick malware authors like to do). It sounds like this system needs a thorough spyware/malware/virus cleansing session. -- Finlay McWalterTalk 18:57, 16 August 2010 (UTC)[reply]
Oh, well there was no executable in the IE folder - that part can get deleted, right? - so he sought out an installer from Microsoft and ran it. I doubt it did much more than put the executable back in the folder. It's a fair point that this could in fact have been malware; it's now sequestered in Avast's vault, anyway, since nobody here actually wants to use IE. I'm going to search for rootkits with Rootkit Revealer when Avast has finished a thorough scan. It was run last week, and has found more malware since then, so that's pretty bad. Your advice on further free cleaning tools is welcome. To add insult to injury it appears to be shitty malware that can't even redirect properly. 86.21.204.137 (talk) 20:52, 16 August 2010 (UTC)[reply]

The next day


So I spent most of a day trying to fix that, and came home. All we had achieved was to do a thorough scan with Avast, completely uninstall Firefox, add a couple of alternative browsers, and install Firefox again. Now, apparently, the computer is crashing a lot, and both Opera and the re-installed Firefox are suffering from the same problem with google links, although K-meleon is mysteriously unaffected. Any advice on what to (tell my friends to) try next?
I did attempt to use Rootkit Revealer, and it found five discrepancies, then it refused to save its log file (invalid volumes, or something) and crashed. At least one of the things it found, some googling suggested, was a false positive. The others sounded like harmless things (although, perhaps, harmless hijacked things), and it was after midnight at this point so we said "that's probably fine" and did nothing. What I wonder is: are rootkits actually noted much in the wild, or is this line of investigation probably a wild goose chase? Card Zero (talk) 23:04, 17 August 2010 (UTC)[reply]

Automated text input program


Hi, does anyone know if there is a [freeware] script/software that can input pre-written text into a browser running JavaScript? My knowledge is very limited on this topic so apologies if I'm not making sense to more erudite users. Basically, in a text field, I want to write something, wait for a few seconds, write something else, wait again, and write something else again, but all automated on a continuous loop obviously. Thanks very much in advance and I will check this section periodically if you require any more clarification. Thank you! 81.105.29.114 (talk) 17:37, 16 August 2010 (UTC)[reply]

You mean something like Google Docs - basically a web-version of MS Office? -- kainaw 17:53, 16 August 2010 (UTC)[reply]
Nope :-) - The thing I'm thinking of is kinda like a macro script for cutting down repetitive manual input, but completely automated, with a setting that allows a few seconds' delay in between each input. But thank you anyway. 81.105.29.114 (talk) 18:06, 16 August 2010 (UTC)[reply]
This would be straightforward to write in Greasemonkey, but I'm not aware of any canned solution to this particular problem. --Sean 18:18, 16 August 2010 (UTC)[reply]
That could work, thanks. Alternatively, is there a simple program that could just enter predetermined text into a field? Rather than a program inside of the browser as a plug-in. 81.105.29.114 (talk) 18:42, 16 August 2010 (UTC)[reply]
AutoIt maybe? -- 78.43.71.155 (talk) 21:06, 16 August 2010 (UTC)[reply]
I will try that out too, sir, thank you. 81.105.29.114 (talk) 21:13, 16 August 2010 (UTC)[reply]
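For illustration, a rough Greasemonkey-style sketch of the approach Sean suggests above; the @include pattern, the element id and the messages are placeholders you would adapt to the real page:
// ==UserScript==
// @name     Auto-typer sketch
// @include  http://example.com/*
// ==/UserScript==
var messages = ["first message", "second message", "third message"];
var i = 0;
setInterval(function () {
    var box = document.getElementById("some-text-field");  // hypothetical field id
    if (box) {
        box.value = messages[i % messages.length];  // overwrite the field with the next message
        i = i + 1;
    }
}, 5000);  // repeat every 5 seconds, looping through the messages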

random number generator problem?


I was bored on a long car trip a few days back and, using a TI-89 calculator, I wrote a program where the calculator would use its built-in random number generator to select either the number 1 or 2. If it selected 1, it would increment a counter. The program would run this loop 10,000 times and give me the value of my counter, thus telling me how many times the random number was 1. The results I got were very interesting, and I was wondering if anyone could tell me why. I ran the program 20 times and the results were as follows...

Test 1, 50.64% #1
Test 2, 52.33% #1
Test 3, 51.73% #1
Test 4, 50.72% #1
Test 5, 51.02% #1
Test 6, 49.97% #1
Test 7, 50.92% #1
Test 8, 51.07% #1
Test 9, 52.02% #1
Test 10, 51.78% #1
Test 11, 50.63% #1
Test 12, 51.00% #1
Test 13, 51.15% #1
Test 14, 50.87% #1
Test 15, 50.25% #1
Test 16, 50.91% #1
Test 17, 50.80% #1
Test 18, 51.23% #1
Test 19, 50.82% #1
Test 20, 51.01% #1

There seems to be a very real bias towards 1 vs 2 in that #2 was selected more only 1 time out of 20, though #1 never ended up being selected more by a very huge margin. Is there a problem with the built-in random number generator, or is this amount of testing not enough to be statistically significant? Googlemeister (talk) 18:37, 16 August 2010 (UTC)[reply]

Exactly what did you use to generate the random number? -- kainaw 18:57, 16 August 2010 (UTC)[reply]
Your calculator won't be generating true random numbers, but rather pseudorandom numbers - basically they seem random, but they're not. However, I really think you need to use a larger sample to be able to say anything conclusive about the randomness: although you're generating 10,000 random numbers per test, you're only comparing 20 results, and the average of those 20 is 51.0435%, which (to me) is still pretty close to 50%, so I don't think there's anything strange about it (yet).  ZX81  talk 19:07, 16 August 2010 (UTC)[reply]
I am not sure I agree. Only 1 out of 20 on what should be a 50/50 shot is a probability on the order of 2^19, right? 1 in 500,000? Googlemeister (talk) 20:34, 16 August 2010 (UTC)[reply]
I am betting that one of our compatriots at the Math Desk could tell us for sure, using all that fancy statistics for detecting randomness that has been developed from Pearson's chi-square test onward. --Mr.98 (talk) 22:21, 16 August 2010 (UTC)[reply]
I admit it has been a good many years since I've looked at this sort of stuff, but I'm sure I remember that when it comes to randomness, having a big enough sample is very important because it's random (I know this is the worst explanation ever!). Although you would eventually expect the averages to be 50/50 with a big enough sample size, if something is truly random then getting 20 x 1 in a row is just as likely as getting 20 x 2; it's just not what you'd expect, but it is nevertheless random. With a big enough sample size you could expect the results to be more equal, but otherwise... I'm probably not explaining my reasoning very well, am I? Sorry, Mr.98's idea about the Math desk is probably a better one!  ZX81  talk 22:37, 16 August 2010 (UTC)[reply]
Depending on the way in which the calculator generates its pseudorandom numbers, and depending on how you implemented it, there may in fact be a bias. My understanding (following Knuth and others) is that many of the prepackaged "Rand" functions included in many languages are not very statistically rigorous. I don't know if that applies to the TI-89 though. --Mr.98 (talk) 22:26, 16 August 2010 (UTC)[reply]
It's also possible that the random numbers are perfectly fine (unbiased, or zero mean, and otherwise "statistically valid" random numbers). But Googlemeister's description of his/her algorithm might not be entirely perfect. A tiny systematic bias screams "off-by-one error" to me - are you sure you normalized your values by the correct amount? That is, if your for-loop ran from 0 to 100, did you divide by 100 or 101? (Similar logic-errors could crop up, in other ways). Another probable error-source is floating-point roundoff. The TI-89 is a calculator - so it uses a floating point representation for its internal numeric values. There are known issues related to most representations of floating-point: if you add a small number (like "1" or "2") to a large number (like the current-sum in the loop), you may suffer a loss of precision error that is a "design feature" of floating-point representations. The TI-89 uses an 80-bit binary-coded decimal float format; unlike IEEE-754, this format's precision "pitfalls" are less widely-studied (but certainly exist). Nimur (talk) 01:01, 17 August 2010 (UTC)[reply]
The chance of 200,000 flips of a fair coin deviating more than 1% from 50% heads is less than 1 in 10^18, so yes, this is statistically significant. Did you write rand(100) >= 50, by any chance? That will be true 51% of the time. -- BenRG (talk) 05:51, 17 August 2010 (UTC)[reply]
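For what it's worth, a quick sketch of why that comparison would be biased, assuming the calculator's rand(100) returns a whole number from 1 to 100 (simulated here in Python rather than on the calculator):
import random
trials = 1000000
hits = sum(1 for _ in range(trials) if random.randint(1, 100) >= 50)
print(hits / float(trials))   # about 0.51, because 51 of the 100 possible values (50 through 100) pass
Counting with > 50 (or >= 51) instead gives the expected 50%.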
"Pseudorandom" is really a bit of a misnomer: if it's not truly random, then to what degree is it random? That's the question you are answering here. I remember playing a similar game with the Visual Basic RNG and finding that, depending on how it was seeded, it LOVED the number 4 (for a single-digit rand() call), so anything involving the term pseudo should be taken with a grain of salt. I would think that a slower-moving, more dedicated system like a graphing calculator would have an especially hard time coming up with non-deterministic randomness without a lot of chaotic user input as a seed. --144.191.148.3 (talk) 13:39, 17 August 2010 (UTC)[reply]
I will have to investigate more in depth next time I have significant downtime. Googlemeister (talk) 13:42, 17 August 2010 (UTC)[reply]
I assume you know how to make a fair pseudo-coin from a biased but reliable coin? Throw twice, discard HH and TT, count HT as H, TH as T. --Stephan Schulz (talk) 13:50, 17 August 2010 (UTC)[reply]
Hey, that's clever, thanks. I never knew that. Comet Tuttle (talk) 17:56, 17 August 2010 (UTC)[reply]
You're effectively taking the first derivative of the coin value. This works if the bias is exactly and only at zero frequency (in other words, a preference for heads or a preference for tails, but independent of previous results). If there is a systemic higher-order bias (in other words, if the distribution of heads and tails is pathological and has time-history), you won't actually be guaranteeing 50-50 odds! All you did was high-pass-filter the PRNG. For a coin, this is a non-issue - but for a PRNG, this is a serious issue! Nimur (talk) 18:26, 17 August 2010 (UTC) [reply]
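To make the coin trick above concrete, a minimal sketch in Python, assuming the flips arrive as a list of 'H'/'T' characters (this is von Neumann's classic debiasing procedure):
def debias(flips):
    # look at non-overlapping pairs; discard HH and TT, keep HT as H and TH as T
    out = []
    for a, b in zip(flips[0::2], flips[1::2]):
        if a != b:
            out.append(a)
    return out
# example: debias(['H','H','T','H','H','T','T','T']) returns ['T', 'H']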

date


In a batch file, how can I make the date display like "2010 - August"? 82.44.54.4 (talk) 19:46, 16 August 2010 (UTC)[reply]

If all you need is the different formatting, and not the name of the month, you could try:
FOR /F "tokens=1-3 delims=/" %%G IN ('echo %DATE%') DO echo %%I - %%H - %%G
Note that you need to replace the / after delims= with the date separator for your locale (run date /t and see which character separates the numbers), and you might have to shuffle %%I, %%H, and %%G around depending on your locale as well (some use MM-DD-YYYY, others DD-MM-YYYY, etc.)
Also, if you want to try it on the command line, you have to use single % signs instead of %%. -- 78.43.71.155 (talk) 21:01, 16 August 2010 (UTC)[reply]
That locale-shuffling behavior is reason enough not to do this: you will have written an unpredictable and non-portable script whose execution depends on users' settings. It would be preferable to design a system that doesn't rely on such assumptions, if you plan to distribute this script, or use it for anything non-trivial. Nimur (talk) 00:58, 17 August 2010 (UTC)[reply]
You could add some findstr nastiness followed by a few if/else constructs triggered by findstr's errorlevel, assuming that MM/DD/YYYY always uses the "/", DD.MM.YYYY always uses the ".", and YYYY-MM-DD always uses the "-" (and don't forget to catch a "no match" situation in case you run into an unexpected setting). Checking if that really is the case, and coding that nasty beast of code is left as an exercise to the reader. ;-) Of course, if Nimur knows of a solution of the kind that he considers preferable (see his post above), I, too, would be interested in seeing it. :-) -- 78.43.71.155 (talk) 09:14, 18 August 2010 (UTC)[reply]
(Sadly, my solution in this case would be to use Linux. date, a standard program, permits you to specify the output-format, and is well-documented. But that is an inappropriate response to the original poster, who specifically requested a batch-script solution!) One can find Windows ports of date in the Cygwin project; I am unaware of standalone versions. Nimur (talk) 20:17, 18 August 2010 (UTC)[reply]
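For reference, with the GNU date that Cygwin provides, the exact format asked about above is a one-liner (the month name follows the current locale):
date +"%Y - %B"
which prints, for example, 2010 - August.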

Trying to get iPhone 3G to connect to home WiFi unsuccessfully


When it says "Enter the password for [my network]," isn't that my router's password, i.e., the password I use to get to the router settings? That's the one that gets my laptop to access my network, but the iPhone keeps saying "Unable to join the network '[my network]'." Thanks. 76.27.175.80 (talk) 22:26, 16 August 2010 (UTC)[reply]

No, it's actually asking for your wireless encryption key (called either WEP, WPA or WPA2), but you should be able to get this from logging into the router.  ZX81  talk 22:30, 16 August 2010 (UTC)[reply]
Thank you! 76.27.175.80 (talk) 22:38, 16 August 2010 (UTC)[reply]
My WPA key is printed on the back of my router.--85.211.142.98 (talk) 05:51, 18 August 2010 (UTC)[reply]

Color saturation on television sets.


In additive color (video/film color resolution) the primary colors are Red, Green, and Blue, and the secondary colors are Cyan, Magenta, and Yellow.

For the longest time, TVs only displayed the primary colors (yielding a saturation of approximately 256 thousand colors). With the advent of HDTV in the early 2000s, however, I heard talk of how TVs would soon display both primary and secondary colors (yielding a saturation of approximately 3 trillion colors).

For years since, though, I heard nothing about this. Not only that, but recently a manufacturer announced that it was moving from RGB displays to RGBY displays (by adding in yellow). Does this mean RGBCMY is dead? Pine (talk) 23:14, 16 August 2010 (UTC)[reply]

I don't think that your use of the word saturation is the common meaning. In any event, our eyes (except for those of tetrachromats) can only see primary colors. The way we detect, say, cyan, is by observing the presence of both blue and green. So there's no obvious benefit to a display having elements that can emit blue+green, but not just one of them. I can imagine having a dedicated way to produce cyan could extend the gamut of a display slightly, but only if there are some intrinsic flaws in the display technology, and probably at a great cost to resolution.
The number of distinct (human-distinguishable) colors produced by a display is only limited by the number of distinct brightness levels for each color element. If I recall correctly, current display technology is able to create adjacent colors that we can barely (if at all) distinguish already, so increasing the number of colors displayed isn't very useful. Extending display gamut would be much more useful, but I don't understand the concept very well myself. Paul (Stansifer) 00:18, 17 August 2010 (UTC)[reply]
You might find the articles Quattron and Opponent process interesting. Exxolon (talk) 00:30, 17 August 2010 (UTC)[reply]
(ec) The RGBY displays are called Quattron. I don't think RGBCMY was ever "alive". You can get a full range of colors with just three primaries because there are just three cone types in a normal human eye. Theoretically you can improve color reproduction by adding primaries beyond three, but not by very much (not by nearly as much as you can by going from 1 to 2 or from 2 to 3). According to the WP article, twisted nematic LCD screens only have 64 brightness levels per primary, for a total of 64³ = 262,144 levels. That might be where your 256,000 figure came from. I have no idea where 3 trillion came from. The main point of adding more primaries is to widen the color gamut, not to increase the "number of colors" (though they would no doubt market it based on the number of colors, if that number happened to be higher than the competition's). -- BenRG (talk) 00:38, 17 August 2010 (UTC)[reply]

FIPS


Can a civilian [legally] use a Federal Information Processing Standard (FIPS) for personal use? What are the pros and cons of using FIPS? On Win7, the computer lets me enable "FIPS" for my Netgear WNR1000. --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 23:38, 16 August 2010 (UTC)[reply]

It looks like the WNR1000 implements FIPS 140-2. That's for communication between compliant equipment. So it's not a pro-or-con thing, it's a matter of whether you need to connect to a FIPS 140 compliant counterpart. -- Finlay McWalterTalk 00:01, 17 August 2010 (UTC)[reply]
So if my computer supports "FIPS" I should enable it? --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 00:56, 17 August 2010 (UTC)[reply]
You have no need to enable it. FIPS is a sort of "audit" to automate and accredit that the equipment meets certain federal requirements for information security. In and of itself, FIPS does not secure any information; it just verifies whether your router is capable of meeting certain standards. You can think of it as a "standard test" that complies with a government regulation. Nimur (talk) 23:08, 17 August 2010 (UTC)[reply]