Wikipedia:Reference desk/Archives/Computing/2013 January 18
Computing desk
< January 17 | January 19 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 18
What does "deep web" mean?
What exactly is the deep web? The Wikipedia article deep web is very complex; what exactly is it? — Preceding unsigned comment added by EatIcecream2 (talk • contribs) 00:02, 18 January 2013 (UTC)
- Roughly put, it's Internet pages that can be accessed with a browser but for various reasons don't appear in Google searches. PrimeHunter (talk) 00:52, 18 January 2013 (UTC)
How are a saved web page and its files folder linked?
When I save a web page, I get an .htm file and a files folder. Whenever I move or delete the .htm file, the files folder is also moved/deleted. How is this done? Can I apply this to other files and folders?--124.172.170.234 (talk) 00:49, 18 January 2013 (UTC)
- Details may depend on the operating system. The file and folder don't move together for me in Windows Vista, but I do get the warning about the feature mentioned on this page: http://superuser.com/questions/156948/linked-files-and-folders-in-windows-vista. As a test I manually (not with a browser) created a .htm file and afterwards a folder with the matching _files name. They were linked in the same way. The test shows the link depends only on the matching names, not on something else saved or done by the browser when you save a complete web page. I don't know whether the same effect can be achieved without the matching names. PrimeHunter (talk) 01:15, 18 January 2013 (UTC)
- Microsoft call the folder and associated files a "thicket", and the Windows default is to move them together to avoid orphaned files and incomplete web pages. They all move together on the unmodified Windows Vista installation I'm using here, but I get a warning if I try to rename a thicket folder, so the matching of names is critical. I don't know how this is implemented -- is it just a subroutine that looks for a folder of the same name in the same location for .htm and .html filename extensions? Dbfirs 08:15, 18 January 2013 (UTC)
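- For anyone curious, the pairing can be imitated outside Explorer. Below is a minimal Python sketch, not Explorer's actual implementation: it assumes the link is purely name-based (as PrimeHunter's test suggests) and that the companion folder uses the English "_files" suffix, which is locale-dependent.
    # Sketch only: pair an .htm/.html file with a same-named "<name>_files" folder
    # and move the two together. The "_files" suffix is an assumption (English Windows).
    import shutil
    from pathlib import Path

    def companion_folder(htm_path):
        """Return the matching "<name>_files" folder, or None if there isn't one."""
        if htm_path.suffix.lower() not in (".htm", ".html"):
            return None
        candidate = htm_path.with_name(htm_path.stem + "_files")
        return candidate if candidate.is_dir() else None

    def move_thicket(htm_path, dest_dir):
        """Move the .htm file and, if present, its companion folder into dest_dir."""
        htm_path, dest_dir = Path(htm_path), Path(dest_dir)
        dest_dir.mkdir(parents=True, exist_ok=True)
        folder = companion_folder(htm_path)
        shutil.move(str(htm_path), str(dest_dir / htm_path.name))
        if folder is not None:
            shutil.move(str(folder), str(dest_dir / folder.name))

    # e.g. move_thicket("Some page.htm", "Archive")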
Merging files
Is there a simple way to merge a large number of small text (or CSV) files (e.g. all of the files from one folder) into one large text file? I don't mind what order they end up in. I've got various text/word processors but none of them seem to offer this function. I'm using Windows 7.--Shantavira|feed me 10:14, 18 January 2013 (UTC)
- Forget that. I've now rediscovered the DOS command that does this.--Shantavira|feed me 10:29, 18 January 2013 (UTC)
- According to netiquette, you should explain explicitly what the solution was. ¦ Reisio (talk) 16:22, 18 January 2013 (UTC)
- You can concatenate files by saying "copy in1+in2+... out". Add /b if they're not text files. -- BenRG (talk) 17:47, 18 January 2013 (UTC)
- May we mark this Q resolved? StuRat (talk) 18:21, 18 January 2013 (UTC)
- Sorry, the solution I found was copy *.txt all.txt.--Shantavira|feed me 20:20, 18 January 2013 (UTC)
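- For completeness, the same merge can be scripted outside the command prompt. A short Python sketch (folder and output names are placeholders), roughly equivalent to copy *.txt all.txt:
    # Sketch: concatenate every .txt file in a folder into one output file.
    from pathlib import Path

    def merge_txt(folder, out_name="all.txt"):
        out_path = Path(folder) / out_name
        with out_path.open("w", encoding="utf-8") as out:
            for f in sorted(Path(folder).glob("*.txt")):
                if f.name == out_name:   # don't copy the output file into itself
                    continue
                out.write(f.read_text(encoding="utf-8"))

    # e.g. merge_txt(".")  # merges every .txt in the current folder into all.txt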
Music Production Software ID request
Hi. Can somebody tell me which DAW is being used here? Thanks! — sparklism hey! 14:30, 18 January 2013 (UTC)
- Pro Tools; compare that shot to the first screenshot in this review. -- Finlay McWalterჷTalk 14:37, 18 January 2013 (UTC)
- Spot on, thanks! — sparklism hey! 14:42, 18 January 2013 (UTC)
Google search by image!
How does Google's search by image work? Do they use SURF? --Tito Dutta (talk) 16:07, 18 January 2013 (UTC)
- It's not clear which technologies they use (I think they're using a combination). See Content-based image retrieval and Structural similarity. They may also be using structural analyses like Haar-like features. I don't think they've said much about what they do use. Tineye is similar but produces somewhat different results, so they're clearly doing things differently. Bing used to have this feature, but I think they removed it - if I'm remembering correctly, it was more apt to find images the same colour as the original, rather than the same shape, suggesting it used a quite different scheme again. But all these companies are private, and it's in their interest to keep the workings of their operation a trade secret. -- Finlay McWalterჷTalk 16:32, 18 January 2013 (UTC)
- I did a little experiment, using a popular Sports Illustrated swimsuit image that I know Google Images has seen in many places. I messed with the original image in different ways, to see if it would still recognise the famous model. Here's what I found (making changes in GIMP):
- negative (colour->invert): no
- mono (black and white): yes
- altered hue (colors->hue-saturation, drag hue about 80 units one way): yes
- detected edges: no (but I don't think that's very informative)
- posterise (to GIMP's #3 value): yes
- upside down: yes
- very low JPEG quality (3% on GIMP's slider): yes
- blurred: yes
- cropped (a smaller image (about 40% of the area of the original) cropped from the original): yes
- To me, this all suggests the analysis is based on statistics comparing relative lightness ("value") in the image - not colour, not edges, not image metadata or name. -- Finlay McWalterჷTalk 18:06, 18 January 2013 (UTC)
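- For reference, roughly the same set of variants can be produced programmatically with Pillow rather than GIMP; this is only a sketch of the experiment above (file names are placeholders, and the hue shift is a rough analogue of GIMP's slider), not anything to do with Google's own code.
    # Sketch: regenerate the test variants from the experiment above using Pillow.
    from PIL import Image, ImageFilter, ImageOps

    src = Image.open("original.jpg").convert("RGB")

    ImageOps.invert(src).save("negative.jpg")                    # negative
    ImageOps.grayscale(src).save("mono.jpg")                     # black and white
    src.filter(ImageFilter.FIND_EDGES).save("edges.jpg")         # detected edges
    ImageOps.posterize(src, 2).save("posterised.jpg")            # posterise
    src.rotate(180).save("upside_down.jpg")                      # upside down
    src.save("low_quality.jpg", quality=3)                       # very low JPEG quality
    src.filter(ImageFilter.GaussianBlur(4)).save("blurred.jpg")  # blurred
    w, h = src.size
    src.crop((0, 0, int(w * 0.63), int(h * 0.63))).save("cropped.jpg")  # ~40% of the area

    # altered hue: shift the H channel (rough analogue of GIMP's hue slider)
    hue, sat, val = src.convert("HSV").split()
    shifted = hue.point(lambda p: (p + 57) % 256)
    Image.merge("HSV", (shifted, sat, val)).convert("RGB").save("hue_shifted.jpg")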
- Quick comment - rotating the image results in positive detection. So we can surmise that the Google algorithm uses one of two common techniques: (1) they have contrived a rotationally invariant transform to "fingerprint" the image in a way that's agnostic to its rotation. That technique is very mathematically hard, and requires a lot of compute-power; but from what we know of Google, neither of those is necessarily an insurmountable obstacle. Alternatively, (2) the search algorithm can pre-rotate the input image 90 degrees, 180 degrees, and 270 degrees, and run four simultaneous search queries against the known-image database. (One can express this process mathematically as an "almost rotationally-invariant" transform; or simply as a for-loop, or a parallelized parameter sweep). This quadruples the computational load, but is a quick-and-easy solution to the problem. For fun, try rotating the image by 45 degrees (or, 71.319 degrees, or something crazy), and see if the search still works! Nimur (talk) 18:45, 21 January 2013 (UTC)
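- A sketch of option (2), with the actual database lookup (which Google has not published) passed in as a placeholder callable:
    # Sketch of option (2): pre-rotate the query image by 0/90/180/270 degrees and
    # run one lookup per orientation. "lookup" is a stand-in for the real index query.
    from PIL import Image

    def rotation_tolerant_search(path, lookup):
        img = Image.open(path)
        results = []
        for angle in (0, 90, 180, 270):
            rotated = img.rotate(angle, expand=True)  # expand=True keeps the corners
            results.extend(lookup(rotated))           # one query per orientation
        return results

    # e.g. rotation_tolerant_search("query.jpg", lookup=lambda im: [])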
- Also, most JPEG decoders can spit out YUV as an intermediate form of the image (and even if the input isn't a JPEG, any image can be converted to YUV). In the YUV color-space, you get brightness, or "luma", for free - just by throwing away half the image. What this often means to an image-processing algorithm is: "use half the data, therefore complete the algorithm twice as fast..." So relying only on brightness, while throwing away chroma (color) information, is a very common trick in image processing and computer vision. Nimur (talk) 18:48, 21 January 2013 (UTC)
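- For illustration only (not necessarily what Google does), keeping just the luma channel and discarding chroma takes a couple of lines with Pillow:
    # Sketch: keep only the brightness (luma) channel and throw away the chroma.
    from PIL import Image

    img = Image.open("query.jpg")
    luma, cb, cr = img.convert("YCbCr").split()  # Y = luma, Cb/Cr = chroma
    luma.save("luma_only.png")                   # analysis would proceed on Y alone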
- Further, I've experimented with tweaking the lightness of the image to see how far I can push it before Google fails to recognise it. I used ImageMagick thus: convert -modulate 300 original bright300.jpg. I found, for my test image, that Google would recognise the image up to a value of 300, which is subjectively very bleached out (and I think is already past the point at which many values are being clamped at the top). Above that it only offers "visually similar images", which are images that are similar only in as much as their colours and their very rough distribution are somewhat similar to the input image. -- Finlay McWalterჷTalk 21:53, 18 January 2013 (UTC)
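- To find the failure point more systematically, one could script a sweep of -modulate values; a sketch assuming ImageMagick's convert is on the PATH (file names are placeholders):
    # Sketch: generate a series of progressively brightened copies with ImageMagick.
    import subprocess

    for brightness in range(100, 501, 50):  # 100 = unchanged; larger = brighter
        out = "bright%d.jpg" % brightness
        subprocess.run(["convert", "original.jpg", "-modulate", str(brightness), out],
                       check=True)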