Wikipedia talk:Notability (academic journals)/Archive 4
This is an archive of past discussions on Wikipedia:Notability (academic journals). Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
"always qualifies", how it is used, where it came from, and how that fits with WP generally
Apologizing in advance for this long thing.
We all know that in general WP is pretty allergic to rules that are applied mechanically. The community has some very bright lines that are "rule"-like (like, umm OUTING). We of course have strong policies that are very near-rule like, but they tend to be broad statements that need application. They aren't binary "rules".
Everybody here is busy and there are definitely subcultures and norms that develop in these subcultures. Some of that is subject matter-based, and some is just what humans tend to do. But these differences are really clear, when people who don't usually edit that area, come and try to do stuff. I experience this all the time in my editing about health - some people who don't edit about health and try to, are really baffled by MEDRS and they sometimes get angry -- I can feel the "what the hell?!?" reaction building in people who are used to editing without sourcing at all, or not really thinking about source quality. Or who do think about source quality but are unfamiliar with MEDRS. And sometimes they get all fired up and come and try to change MEDRS, which is (usually) a huge waste of everyone's time. I get all this. With respect to MEDRS, when people are upset, I go out of my way to try explain. I even wrote an essay about it (WP:Why MEDRS?) in order to explain. I get it, that this bright line thing (really the "no" it generates) is weird to people but very normal and even important to folks in this project.
Looking at the recent blow-up, what I think has upset people who are not part of the regular group who work on journals, is hitting the wall of this "always" business with regard to JCR impact factor, which has indeed been applied like a rule to automatically confer Notability and try to end discussions. If folks who are part of the group that works on journals are unaware of this - that "always" rule is jarring - it feels weird - it feels like a walled garden kind of thing.
In the 1st AfD, the "rule" was applied as a rule this !vote and this !vote and is ~probably~ (?) the cause of the rather shockingly rapid close (which didn't cite the "rule"). I'll note that this !vote did the "rule" thing some, but added some nuance. This !vote was the most ... "normal"-for-an-AfD !vote in the 1st AfD.
In the 2nd AfD the same dynamic is playing out with !votes like this, and this and this. And again we have a comment at the 2nd AfD that is .. well, "normal"-for-an-AfD, in that it didn't treat "always" like a rule that can be applied to automatically confer notability.
Other WikiProjects (like the radio people) have developed these kinds of "always" rules for notability too. It is jarring to encounter them.
So where did the two instances of "always" confers Notability come from in this essay?
Condensed: the Nobel "always" came from WP:PROF, from which this page was copied in August 2009; the "always" with regard to the JCR impact factor was added in the first month, was not discussed then, and has never been discussed since.
The following discussion has been closed. Please do not modify it.
From the first day, it contained the sentence: " For the purposes of Criterion 2, major academic awards, such as the Nobel Prize, MacArthur Fellowship, the Fields Medal, the Bancroft Prize, the Pulitzer Prize for History, etc, always qualify under Criterion 2. Some lesser significant academic honors and awards that confer a high level of academic prestige also can be used to satisfy Criterion 2. Examples may include certain awards, honors and prizes of notable academic societies, of notable foundations and trusts (e.g. the Guggenheim Fellowship, Linguapax Prize), etc. Significant academic awards and honors can also be used to partially satisfy Criterion 1 (see item 4 above in this section)." That makes sense, coming from PROF. (but hm, an "always") That language was added to PROF originally in this diff in August 2008 with edit note "Boldly install the new version". There was ~fairly~ robust discussion of that major revision, including this "always", in several sections of archive 5 of the related talk page. WP:PROF also contained, well before that, a Caveat section that included the following:
Which was initially adopted here as follows:
That language came into this essay from day 1, and although it has been moved around and changed some, it is really important. That caveat was an effort to speak to the broader community. So after WP:PROF was copied here, it was worked over, and in the first set of diffs adapting this to Journals, the following sentence was added in this diff, apparently following the logic of the above "always" clause: "For the purpose of Criterion 1, having an impact factor assigned in Thompson Scientific's highly-selective Journal Citation Reports always qualify under Criterion 1." So that 2nd "always" clause has been in there from the beginning. I checked the archive, and it was never discussed. (!)
This is kind of tldr, but what I am trying to communicate is that
- 1) "always" X, applied as a rule, is not normal in WP broadly
- 2) please consider getting rid of it (and I mean please actually consider it - please have a robust discussion);
- 3) if you are going to keep it,
- i) please consider holding an RfC to get community buy-in (I might initiate that myself if someone native to this project doesn't);
- ii) please develop ways of explaining it - you might want to even add that explanation to this essay, and a practice of explaining it, when people are unaware of it.
Thanks. tldr - Jytdog (talk) 05:34, 15 December 2016 (UTC)
- Thanks for this post, the history is illuminating. Probably a good start for the discussion that we need. Unfortunately, I'm leaving tomorrow for holiday travel to visit family and friends, so I won't be able to contribute much in the coming days. --Randykitty (talk) 14:19, 15 December 2016 (UTC)
- Thanks for your note. Sorry it is so "preachy" but everybody is here is good people and there is just sub-cultures clashing, I think. Jytdog (talk) 19:33, 15 December 2016 (UTC)
- Don't worry, I've got a thick skin and as long as people talk about arguments and not the person, then "everybody is here is good people and there is just sub-cultures clashing" is exactly how I take it. We'll find a compromise that everybody can live with, I'm sure. --Randykitty (talk) 21:12, 15 December 2016 (UTC)
- I agree Jytdog (talk) 23:30, 15 December 2016 (UTC)
- That's absolutely how I view it, and I'm sorry if I gave any other impression. Guy (Help!) 21:48, 16 December 2016 (UTC)
- Wait, are you saying that the impact factor always notable is a copy-paste from Nobel laureates are always notable? If so, that might possibly be the most inane thing I have ever seen on Wikipedia. The difference between having an impact factor of 0.124 and having a Nobel Prize is pretty bloody obvious. Please tell me that's not what happened? Guy (Help!) 21:43, 16 December 2016 (UTC)
- From what I have checked of Jytdog's analysis, that is what happened. (This is a common fallacy when others start new notability proposals too: they copy language without recognizing how different the cases are. The Nobel Prize is a unique case that has very little analogy elsewhere). This is why this guideline is extremely problematic as it's written not towards selectiveness but inclusivity. --MASEM (t) 21:48, 16 December 2016 (UTC)
- Given that history, I think it can be removed immediately and without any further need for discussion, not least because a Nobel Prize is a binary (you have or you don't) thing that gets awarded to a handful of people a year, whereas an impact factor is a linear metric that applies by now to tens of thousands of journals, with most papers in some listed journals never cited at all. Guy (Help!) 21:52, 16 December 2016 (UTC)
- Sorry, that figure is wildly off. The JCR at this moment contains analyses of 11,997 journals, not "tens of thousands". --Randykitty (talk) 12:42, 22 December 2016 (UTC)
- The impact factor has never been compared to a Nobel Prize, and that's not why we mention it. It's because Journal Citation Reports is one of the most selective, if not the most selective, when it comes to including journals for consideration. Headbomb {talk / contribs / physics / books} 22:50, 16 December 2016 (UTC)
- Historically that is exactly where it came from. Look at this diff at 09:24 on 26 August 2009 where the "always" IF was added, and just a few diffs later at 10:13, 26 August 2009 in this diff the Nobel "always" was removed. It is a clear transfer of the "always" from PROF to this essay; the presence of the "always" is not normal in WP and is like a prion here. Jytdog (talk) 23:11, 16 December 2016 (UTC)
- It's still not why we have it, but rather because Journal Citation Reports is one of the most selective, if not the most selective, when it comes to including journals for consideration. Headbomb {talk / contribs / physics / books} 23:17, 16 December 2016 (UTC)
- And to show truly how selectivity as a criterion is the point of this, WP:NASTRO based itself on WP:NJOURNALS's selectivity idea [1]. Headbomb {talk / contribs / physics / books} 23:21, 16 December 2016 (UTC)
- As selective as the Nobel Committee? And when you say selective, it includes at last count over 11,000 journals, which is a lot. The highest IF is over 130 (CA - a cancer journal for clinicians). I have seen impact factors of 0.12 and less. There are examples where publications on Beall's list have had impact factors assigned, and a fair number have been removed over the years. That does not look like a "highly selective" list, and certainly not to the point where inclusion confers automatic notability, as it does for a Nobel laureate. Guy (Help!) 23:31, 16 December 2016 (UTC)
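- (Aside for readers unfamiliar with the metric: a minimal sketch of the standard two-year impact factor calculation used by JCR; the years and counts below are illustrative only, not taken from any journal discussed here.

\mathrm{IF}_{Y} = \frac{C_{Y,\,Y-1} + C_{Y,\,Y-2}}{N_{Y-1} + N_{Y-2}}

where C_{Y,X} is the number of citations received in year Y by items the journal published in year X, and N_X is the number of citable items it published in year X. So a journal that published 100 citable items in 2014 and 100 in 2015, whose items from those two years were cited 24 times in 2016, gets a 2016 IF of 24/200 = 0.12 - the low end mentioned above. Note that the calculation is pure citation counting over a database; no secondary commentary about the journal enters into it.)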
- Example: [2] is still listed as of 2016. OMICS Group is probably the best known predatory open access publishing company in the world. Selective? Guy (Help!) 23:35, 16 December 2016 (UTC)
- Notability is independent of reliability. We have plenty of articles on crank journals (see Journal of Global Drug Policy and Practice / Journal of Cosmology), and I'd argue it's especially important to have those. As for OMICS, it's a shit publisher for the most part, but some of their journals seem to make more sense than others. I'm no biologist, so I can't say for myself if OMICS: A Journal of Integrative Biology is reliable or not, but it's certainly a notable one. It's likely included because it's published by Mary Ann Liebert, not OMICS. Headbomb {talk / contribs / physics / books} 23:39, 16 December 2016 (UTC)
- Since this is the section about "always", am providing a diff here of Headbomb's consent to changing "always" and a diff of Guy's statement of intent to change it. (sorry, i think about people finding stuff later) Jytdog (talk) 00:09, 17 December 2016 (UTC)
- So we have to keep this nebulous phrase because one editor wants it that way? I do not agree with this, and it is already causing a problem in one discussion [3]. Just because the criterion says "always" doesn't preclude an editor using their own judgement in a deletion discussion. NJOURNALS is meant to be a guide when it comes to deletion discussions. The wording right now appears to be pretty much useless. Saying "reasonably reliable indication of significance" understates the significance of the JCR impact factor and is a phrase that is unclear in its meaning; consensus should be sought for a change from "always", not the other way around because one or two editors decide to change it. Change to "almost always" for now - get rid of this unclear phrasing that means nothing. I think consensus should have been sought for such a change because this is central to NJOURNALS. Also, an impact factor does indicate that the journal has an 'impact' in its field, where hundreds and hundreds of other journals do not have an impact factor at all. This makes a difference in keeping a journal article. Steve Quinn (talk) 05:16, 22 December 2016 (UTC)
- Also, Jytdog - Headbomb said it was OK to change it to "usually", not the nebulous, meaningless phrase that is there now. This is not the agreed-upon change. So please change it to either "usually" or "almost always". I don't see consensus here for the phrase currently in this guideline. Steve Quinn (talk) 05:21, 22 December 2016 (UTC)
- Thanks for talking Steve Quinn. If what you mean by "useless" is "it is not a "rule" anymore," that is indeed the point; 1) the "rule" thing is extremely unWikipedian and 2) this "rule" arose in this essay in a kind of outrageous and invalid parallel with the Nobel Prize. Do you see all that? Jytdog (talk) 05:42, 22 December 2016 (UTC)
- I don't see any correlation between this and the Nobel Prize, although I did unsuccessfully try to look at diffs provided. I will try again. Also, I have never seen anybody call this a rule before. As with any guideline - it is a guide and every editor should use their best judgement. In every case, except the one that I linked to, impact factor clearly indicates the journal has an impact, where hundreds of journals do not have an impact factor (at all). The journals without an impact factor do not usually make the cut on Wikipedia, although some do. I have worked on or with this project for years. Steve Quinn (talk) 05:49, 22 December 2016 (UTC)
- I hope this change didn't come about just because one journal is obviously fishy and should not have an impact factor - although it advertises that it does on its site. Steve Quinn (talk) 05:51, 22 December 2016 (UTC)
- The point that is made is that "impact factor" is only a metric deemed significant by a very tiny minority (those interested in the informational sciences). Having a high IF has no demonstrated correlation to being the subject of significant coverage in secondary sources, which is what notability is supposed to be built on, not an analytical value. --MASEM (t) 06:15, 22 December 2016 (UTC)
- Unfortunately, it's not true that only "those interested in the informational sciences" look at IFs. While many people agree that IFs are overrated, many more people still use it as a guide on where to submit the results of their research. Every journal editor knows that if a journal's IF goes up, so do submissions. Publishing in a journal with a high IF still makes or breaks the careers of young scientists. We can like that or loathe that (I belong in the latter category), but it still is the reality. WP is not the place to change this practice, we should only report on what is being said in reliable sources. --Randykitty (talk) 12:42, 22 December 2016 (UTC)
- User:Randykitty and Masem the discussion about a high IF is a complete waste of time. The criterion is having any IF. Jytdog (talk) 13:00, 22 December 2016 (UTC)
- Except that what I say about having a high IF goes even more for just having an IF. New journals face an uphill battle to get established, because they don't have an IF. A vast majority of researchers will not submit anything to a journal that is not in the JCR. But as soon as a journal gets accepted for inclusion, even though at that point the journal's first IF is not known yet, submissions will soar. The world at large obviously thinks that having an IF is very important for a journal... --Randykitty (talk) 13:16, 22 December 2016 (UTC)
- The world at large is trying to get tenure, and hiring committees look at IF in submissions. Most people think this system is badly broken. Guy (Help!) 13:35, 22 December 2016 (UTC)
- I agree. Nonetheless, the reality is that tenure and hiring committees think IFs are valuable. --Randykitty (talk) 14:29, 22 December 2016 (UTC)
- It still comes down to the fact that if you step outside academic circles, very few people in the world care at all about IF or the like, and academic journals rarely get called to attention unless they are an established source of major science news reported in mainstream media (like Nature), or they become embroiled in bad science or a similar situation. Within the academic world, the whole issue around IF and importance and the like is not something discussed in readily-verifiable sources - it's word of mouth, debates and discussions at conferences, etc., but little to none is written down save for that small slice of the informational sciences. And because IF itself is generated automatically simply from database results, it's not transformative information about a journal. Hence, IF is in no way a measure of notability as defined on WP. We need secondary sources, and a high IF is not an apparent merit to assure those sources will exist. --MASEM (t) 16:11, 22 December 2016 (UTC)
- 1/ WP covers many things that are ignored or barely known outside of the circles that care about it. 2/ If you look at our article on the impact factor, you'll see many references discussing the value or lack thereof of the IF in journals like Nature or BMJ and others. Trade magazines like The Scientist or Lab Times also regularly cover this discussion. Having an IF is the result of a journal having been selected by a committee of experts, I cannot stress that enough. Do they sometimes make mistakes? Surely. Does that invalidate their opinion? If it would, we could just as well abolish every source used on WP, because every source will make mistakes from time to time. --Randykitty (talk) 16:22, 22 December 2016 (UTC)
- The point still comes down to the need for secondary sources to demonstrate notability. And that's the point that even within information science or academic circles, there are very few secondary sources about specific journals. The bulk of the information about journals is statistics like the IF, and those stats alone (alongside primary info like publisher, year started, etc.) are not sufficient for a notable wikipedia article. That's why the "high IF is always notable" is a really really really bad notability presumption. --MASEM (t) 16:27, 22 December 2016 (UTC)
- (edit conflict) What is very clear, is that the effort to end discussions about the appropriateness of a Wikipedia article about Explore by declaring "it has an IF and is notable - it's a rule!!!!" has drawn a lot of attention to this essay and how it is used. Jytdog (talk) 06:16, 22 December 2016 (UTC)
- That's an oversimplification. That journal is also in Index Medicus, a highly selective subset of MEDLINE, curated by specialists from the United States National Library of Medicine. --Randykitty (talk) 14:29, 22 December 2016 (UTC)
- OK, i am going to have to walk away from my computer. It is not a fucking oversimplification. I provided mother fucking diffs of only fucking some of the fucking "rule" being mother fucker applied as a binary mother fucking rule. Jytdog (talk) 15:39, 22 December 2016 (UTC) (redact Jytdog (talk) 16:33, 22 December 2016 (UTC))
- Jytdog, I would greatly appreciate if you could refrain from swearing. Thanks! --Randykitty (talk) 16:22, 22 December 2016 (UTC)
- Do you understand what was so upsetting about what you wrote? Jytdog (talk) 16:33, 22 December 2016 (UTC)
- (edit conflict) Jytdog Yeah, I just noticed that. Somehow I missed that at the top of this thread. So, I stand corrected. I have never seen anybody use this as a rule before these two AfD discussions. So, since there is such a tendency I agree that it needs to be changed. In light of this, I also understand why you changed to the phrase that you had. We don't need a bright-line rule - it is not worth the friction. I am endeavoring to go through the drop-down box for the history aspect - thanks for doing this research ---Steve Quinn (talk)
- Jytdog In fact, if you want to change it back to the phrase you had, go ahead and do so. We can work on it later after the storm has passed. And I am no longer for keeping "always" in this guideline - based on this discussion ---Steve Quinn (talk) 06:38, 22 December 2016 (UTC)
- Thank you! So beautiful to have authentic discussion. From my perspective, it is OK how it is now, with stronger language but the "don't use it as a rule" thing; it was OK before. Whatever is most useful to the regulars/members of this project, whose tool this primarily is, but helps prevent future straying into rule-using. :) Jytdog (talk) 06:45, 22 December 2016 (UTC)
@Jytdog: I cannot emphasize enough that you are in the minority and your edits to this guideline are against consensus. Rather than continue your derangement, just go edit elsewhere. Chris Troutman (talk) 15:53, 22 December 2016 (UTC)
- And I cannot emphasize enough that the harder this project defends this misguided essay the more likely it is that the broader community will render it historical through an MfD. Walled Gardens get torn down. Jytdog (talk) 16:14, 22 December 2016 (UTC)
- By all means! I support you using process (RfC or MfD) to resolve this. You may not, however, continue to boldly edit when you've already been reverted. Your emotional outburst above and apparent frustration lead me to believe you've gotten too wrapped up in getting your own way and I think there are other places on Wikipedia where you could contribute. Chris Troutman (talk) 16:21, 22 December 2016 (UTC)
Notable editor
FYI: An argument is being brought forward at Wikipedia:Articles for deletion/The Rutherford Journal, that NJournals should be modified to include that a journal is notable if it is peer-reviewed and edited by a notable academic. --Randykitty (talk) 00:58, 19 December 2016 (UTC)
- Notability is not inherited, period. That's a non-starter. --MASEM (t) 04:21, 19 December 2016 (UTC)
- I think there is a logical argument here, in that highly notable academics do not edit tin-pot journals, and so if a journal has a prominent editor then it is likely notable. However, if it is that notable a journal then there will be more direct evidence/measures attesting to notability than the identity of the editor. So, I wouldn't use this approach. Further, in the Rutherford case, though the editor qualifies as notable and has a wiki-bio, he's not in the "highly notable academics" category I mean above. EdChem (talk) 10:28, 19 December 2016 (UTC)
- Actually seems to me that at the moment he only meets C8 of WP:PROF ("The person is or has been the head or chief editor of a major, well-established academic journal in their subject area"), which makes for interesting circular reasoning... --Randykitty (talk) 10:38, 19 December 2016 (UTC)
Jack Copeland: Reviews of his book on Turing: Irish Times; Notices of the American Mathematical Society (10 pages); Publishers Weekly; Technology & Culture; Logos; doi:10.1002/asi.23705
- Coverage of his views by the BBC; audio interview with the Australian ABC; video lecture on Turing (60 min); coverage from Bletchley Park (note this one lists him as FRS NZ, but he's not an FRS according to the Royal Society website); International Association for Computing And Philosophy (Covey Award); podcast on Turing with Copeland; Copeland writing in Huffington Post; Copeland writing in Scientific American; mentioned in connection with Imitation Game
- Reviews of Colossus: The Secrets of Bletchley Park's Codebreaking Computers: doi:10.1353/tech.2007.0112; doi:10.1080/03612759.2011.572539; Nature review
- Turing's electronic brain: American Mathematical Society
- I think there's enough here to make an argument for Copeland's notability. Thoughts, Randykitty? EdChem (talk) 11:56, 19 December 2016 (UTC)
- Notability is not inherited. The editor may be notable, but that means nothing as far as the journal goes. Headbomb {talk / contribs / physics / books} 13:28, 19 December 2016 (UTC)
- Headbomb, the post above was for Randykitty's benefit as the notability of Copeland is not well established in his wikibio, so I was pointing out a few things that I found when I had looked earlier. I would not have posted this at the AfD discussion because it is not relevant to the retaining of the journal article, I know. :) I know notability is not inherited but there can be correlations. If an internationally renowned academic is the editor of a journal, even if I had never heard of it, I would likely infer that it was a significant journal... but it also likely follows there would be evidence to establish notability. There is logic to the suggestion that highly significant editor likely implies significant journal, but it is also true that the idea of including this in policy is unwise, especially as the argument that marginally notable editor implies journal must be kept automatically is both undesirable and a likely consequence of such a policy change. EdChem (talk) 13:37, 19 December 2016 (UTC)
- I can name at least three different journals where an editor was extremely notable, but the journal was completely ignored by the world. Rabbit holes, can be fallen into, you see. jps (talk) 21:39, 4 January 2017 (UTC)
- Nope. Too many bad journals have appropriated the names of too many good academics for this proposal to fly. —David Eppstein (talk) 21:27, 5 January 2017 (UTC)
Indexing
I am afraid I take much exception to the claim that selective indexing confers notability. It can be a piece of evidence used in notability, but by itself it does not indicate notability. The essay should make this clear. It's only a piece of evidence, but there are a lot of journals that are indexed selectively which are, nevertheless, entirely non-notable (in the sense that no one pays attention to them and never has, but the publisher continues to put them out there because they receive fees from authors/organizations/etc.)
jps (talk) 21:37, 4 January 2017 (UTC)
- Nope, selectiveness confers notability. Saying no one pays attention to journals included in selective indices is a contradiction. That's the point of selective indices. I, as a researcher, do not want to be given a bunch of crap in my literature searches. That's why I make use of these services. Because they give me the non-crap, and exclude shit like The General Science Journal.
- Nothing is perfect of course, but if a journal doesn't have an impact factor, or isn't indexed in a selective database, I would think long and hard about citing something published there. Headbomb {talk / contribs / physics / books} 21:47, 4 January 2017 (UTC)
- And even less submitting something there! These databases are "selective" thanks to curation by a committee of specialists. Just as an Oscar or Grammy nomination by a committee of specialists is taken as proof for the notability of an actor, just so I don't think we should second-guess a committee of specialists selecting academic journals. --Randykitty (talk) 08:14, 5 January 2017 (UTC)
- "an Oscar or Grammy nomination by a committee of specialists is taken as proof for the notability of an actor"?! It is very rare to rare find such an absurd thing said at a notability page. But, this is a failed notability subguideline, masquerading as an essay, so I suppose that's why these rejected notions may be gasping for air here. No, a nomination is an "indicator" that the person might be notable, a rebutable presumption. The real proof of notability is where others have written about it, and the judge is AfD. "curation by a committee of specialists"? Is that a fantasy? Evidence? What is a "specialist"? jps is right. --SmokeyJoe (talk) 09:32, 5 January 2017 (UTC)
- What is a specialist? An expert in the field. What a ridiculous non-argument you're presenting. Headbomb {talk / contribs / physics / books} 09:38, 5 January 2017 (UTC)
- A specialist, with respect to the journals, or to their fields, or to fields published in some of the journals? Who are these specialists who curate databases of selected journals? In most cases, "specialist" is a buzz term of promotion without connection to any specific qualification. I'm sure these hypothetical curators would agree to being called specialists, but it doesn't mean much. Point one out and let's see whether they are an "expert". --SmokeyJoe (talk) 09:58, 5 January 2017 (UTC)
I don't understand where this argument from Headbomb and Randykitty is coming from, whereby they've become convinced that all journals which are included in an index are notable because journals which are not included in an index are not. SmokeyJoe is right. That's simply the logical fallacy: most A are not B, therefore all NOT A are B (spelled out below). A journal may be notable because it is indexed, but the criteria by which indexing occurs in selective indices are not the same thing as what we would need to write an article in Wikipedia. jps (talk) 11:42, 5 January 2017 (UTC)
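(A minimal formalization of why that inference fails, with the labels A = "not included in the index" and B = "notable" chosen purely for illustration:

\forall x\,\big(A(x) \to \neg B(x)\big) \quad\not\Rightarrow\quad \forall x\,\big(\neg A(x) \to B(x)\big)

Even granting the strongest premise - every non-indexed journal is non-notable - that only says the index misses no notable journal; it says nothing about whether everything the index does include is notable. Indexing can be evidence for notability without being a guarantee of it.)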
- I don't understand where you get that idea. Our assertion is that journals included in these databases are selected because they are notable and that journals not included will need other evidence before they can be deemed notable. I don't see any logical fallacy here. Non-included journals can be notable if there are other sources than databases (such as sources describing predatory practices, or because a journal publishes such rubbish that it gets covered elsewhere). --Randykitty (talk) 12:25, 5 January 2017 (UTC)
- Your assertion that journals included in these databases are selected because they are notable is extremely dubious. There are reasons to include journals in a database other than Wikipedia-notability. --SmokeyJoe (talk) 12:29, 5 January 2017 (UTC)
- Clarification: they are included because they are among the most important ones in a given field. Which is evidence of Wikipedia notability. --Randykitty (talk) 12:33, 5 January 2017 (UTC)
- That's better. But who decides which are among the most important versus of lesser importance? Selected curators? Or payment of fees? If the journal is not independent of the database that lists it, the listing is not evidence of notability. Ability to pay creates real-world importance. I'm playing devil's advocate here. Not being listed in the database sounds like a pretty sure indicator of non-importance, but notions of database-listing implying notability fly in the face of the spirit of WP:N. It would be easier to argue inclusion without reference to "notability", along the lines of S Marshall's essay. --SmokeyJoe (talk) 12:44, 5 January 2017 (UTC)
- There's no fee involved to get included into bona fide databases (MEDLINE, Scopus, the Science Citation Index, etc). Almost all major databases are independent of the journals they cover. The one possible exception is Scopus, which is owned by Elsevier. But, whatever you may think of that publisher, they are no fools and if people would feel that their journals were getting an unfair advantage, that would rapidly diminish trust in Scopus. So they put in place mechanisms that separate Scopus from Elsevier in this regard. It's very similar to the procedures that legitimate publishers of newspapers, magazines, academic journals, etc. use to guarantee editorial freedom. "Importance" is indeed decided by selected curators. Of course those are selected on the basis of their competence in a certain field. Again, it's in the interest of the database provider to select the best curators possible, because any database will rapidly lose its credibility if it starts including stuff that is widely perceived as sub-par. --Randykitty (talk) 13:34, 5 January 2017 (UTC)
- The independence of the indices is true, but see my screed below for the bug in this program. Basically, indices leave inclusion/exclusion decisions up to a "committee of experts" who often, in the case of certain pseudoscientific areas, end up being a committee of pseudoscientists. This isn't just a problem for indices, mind you! Even so-called "Gold Standard Open Access" journals suffer from this. I have found terrible pseudoscience published by the Nature Publishing Group in its OA journals because the editorial boards are vast and the "specialization" can be done to such an extent that you get only the credulous evaluating the credulous. This has been documented to have happened in even the best indices, and the reason their reputations do not suffer too badly from this is because obscure pseudoscientific journals remain obscure whether they're indexed or not! This is primarily the reason I bristle at the suggestion that selective indices are automatic indicators of notability on Wikipedia. jps (talk) 14:05, 6 January 2017 (UTC)
- I can assure you no one considers pseudoscientific indices to be worth anything. Now it's always possible the selection process was hijacked in a corner case, but that does not invalidate the general idea. That's why we have WP:IAR, and why guidelines aren't hard rules. Headbomb {talk / contribs / physics / books} 15:13, 6 January 2017 (UTC)
- But we already know that this has occurred and the WP:IAR argument did not work at the AfD. We have an enormous list of journals that are included in selective indices in alternative medicine, some of which may be notable but many of which fly under the radar in exactly the way I am describing. When your corner cases are that many, I think there is a problem with the generalization of the rule. jps (talk) 15:50, 6 January 2017 (UTC)
- A journal being about alternative medicine does not necessarily imply it is a shit journal. These may be journals that investigate alt med claims in a serious and scientifically valid manner. It is also possible they're included for indexing because they are known for being influential in alternative medicine, or make claims that are often cited for the purpose of debunking them in more serious journals, or whatever. Look at the first on the list, The American Journal of Chinese Medicine; I see no reason why we shouldn't have an article on that one. Likewise for Acupuncture in Medicine. Others, however, seem to be clear fails of WP:NJOURNALS, like Alternative Therapies in Health and Medicine. I wouldn't take an article's existence as evidence that WP:NJOURNALS is what allows for it. It could simply be an article that fell through the cracks, and no one got to nominate it for deletion yet. Headbomb {talk / contribs / physics / books} 16:18, 6 January 2017 (UTC)
- You think Acupuncture in Medicine is notable? Then why is its impact factor in the toilet? The fact is, the editorial board is populated entirely with acupuncturists, and Edzard Ernst has let it be known that the papers it publishes are shit (check his blog and search for the journal -- he mentions a specific paper). Nevertheless, the journal itself is not discussed beyond its homepage, the index, and the citation report. It's a shitty article and we should wait for people to actually notice the journal who aren't inveterate trypanophiles like literally every single person associated with that WP:FRINGE outlet. jps (talk) 17:41, 6 January 2017 (UTC)
- Oh, and by the way? Object lesson! It seems the journal you thought wasn't notable passes the religious test of "it's indexed, therefore it's notable". Whether we can write an article or not, be damned! We can just have a permastub! Huzzah! jps (talk) 18:09, 6 January 2017 (UTC)
- It's simply not true that journals are indexed because they are notable: no index uses "is it possible to write a Wikipedia article on this journal" as its criterion for indexing. jps (talk) 12:47, 5 January 2017 (UTC)
- Please see my clarification above. I should have said "important" instead of "notable". --Randykitty (talk) 13:34, 5 January 2017 (UTC)
- Okay, but right now the guideline doesn't make that clarification. Just because an index says a journal is important doesn't mean that it has the final word on Wikipedia stand-alone article notability. Since there exist some journals about which much has been written, if there is a concrete lack of reliable sources written about a journal save for the fact it has been indexed, that to me is an indication that the journal is probably best not separated out as a separate page. Why can't that be reflected in this essay? jps (talk) 16:02, 5 January 2017 (UTC)
- It's not the fact that it's indexed that makes it a notable journal, but the fact that it's indexed in selective indices. We're quite clear that being indexed in trivial/comprehensive databases counts for nothing when it comes to notability (to quote: "Likewise, just because the journal is indexed in a bibliographic database does not ensure notability."). Some journals that pass WP:NJOURNALS could be merged into other entries (e.g. a journal series, or a publisher's article), we recognize that, but there is no sense in merging such entries to publishers when it comes to massive publishers like Elsevier or Wiley-Blackwell, or journals that aren't part of series. Headbomb {talk / contribs / physics / books} 17:19, 5 January 2017 (UTC)
- This discussion has been getting unwieldy, but in the interest of fairness I'll try to respond directly to this point. The question as to whether an index is selective or not is an excellent one to ask. It has the useful function of shutting down the argument that gets made that, for example, because a certain journal is found in Google Scholar, it must be notable. Google Scholar, of course, isn't really an index in the library science sense, so I am happy that this point is made clear. Yes, I applaud the essay for declaring that only selective indices should be used as evidence of notability. But I think the essay goes way too far in this point. The argument that an index is selective is just as much an editorial decision as looking at the journal itself and deciding whether it is notable. This is just removing the step of evaluation up one level to the index rather than the journal. What we then have to do is start having arguments over indices (which are interesting discussions, but somewhat removed from the question of individual journal notability). For example, certain "selective indices" are claimed to be selective because they only include a small percentage of journals. This isn't necessarily a convincing case for selectivity because it is the criteria that the index uses which actually makes it selective. I could publish an index with a single journal in it which would be super exclusive, but that's obviously not evidence of notability. Arguing over whether a particular index, no matter how highly regarded, is doing a good job vetting the more obscure journals it lists is, I think, just exchanging one discussion for another. It really doesn't resolve the question.
- My biggest complaint, ultimately, is that massive publishers are given a pass (though it's true that they are not given carte blanche, there is a bias towards big publishers for a variety of legitimate and not-so-legitimate reasons). Especially these days, when it is known that the academic publishing model is being disrupted, we have to come to terms with the fact that many of these massive publishers are not beyond reproach, nor are the indices which claim to vet them. All else being equal, I of course will go with the Elsevier or Wiley published journal, but there are plenty of journals that these groups publish that are absolute trash because their goal of respectability is only secondary to their goal of making money. It is a sad fact that it is much easier to get your trash-journal indexed if you can convince Elsevier or Wiley or SAGE or whoever to publish your trade journal in exchange for some exorbitant printer's fee (much of the academic publishing model functions in exactly the same way as WP:VANITY presses due to the fact that the audience is so limited). The question then gets turned around, "Is this journal well respected in its field?" Well, when your field is magic done at the top of mountains or some sort of nonsense, all you need is deep enough pockets to buy yourself respectability. When you pay Elsevier to publish your newsletter, you can lean on the weight of their reputation to impress all the rest of those who like to do magic on mountaintops into nodding their heads excitedly as to how now suddenly your marginalized field is being taken seriously. Elsevier makes money, and if the journal remains obscure and in low circulation, it suffers no penalty for publishing such trash. It can hide that low-circulation, low-impact journal deep on its lists and point out if challenged that the journal is indexed clearly because it is "respected" by the "members of their field" (which are the mountaintop magicians, you see). Meanwhile, there are plenty of stories where, for example, legitimate independent journals who bothered to take the claims of the credulous seriously end up marginalized on indices due to only the credulous being accepted as "members of their field". Now, Wikipedia cannot right great wrongs, but neither are we under an obligation to continue to propagate them either. I am all in favor of using a selective index and impact factor as evidence of notability. I cheer this project for doing so. But this cannot be the sole criteria used and right now that feels to me like what is going on. jps (talk) 12:53, 6 January 2017 (UTC)
- "Is that massive publishers are given a pass", absolutely not. We mention nothing of publishers in WP:NJOURNALS, big or small. There is no bias. Likewise, indices and which journal gets included in those indices, as Randykitty has pointed out several times, are selected by independents boards. If an Elsevier published index had a bias towards the inclusion of Elsevier journals, then the index's value would be greatly lessened, and no one would bother using it. And lastly "cannot be the sole criteria used". It is not. The criteria are "Criterion 1: The journal is considered by reliable sources to be influential in its subject area. / Criterion 2: The journal is frequently cited by other reliable sources. / Criterion 3: The journal is historically important in its subject area." Being indexed in selective indices is but one way of meeting those criteria. Headbomb {talk / contribs / physics / books} 13:43, 6 January 2017 (UTC)
- Sorry, you missed my point. It is undeniable that massive publishers are given a pass when it comes to selective indexing. Because we are biased towards selective indices, we are biased towards massive publishers. This is largely how it should be, but it causes the problem I outlined. The "independent boards" being referenced would be boards of mountaintop magicians only, and these impressionable magicians are much more likely to be impressed by an Elsevier journal than one published by a grumpy Stanford academic who is famous for criticizing mountaintop magic. So as to the three criteria: since (1) does not specify what the "subject area" is, it will necessarily include certain pseudoscientific walled gardens. Criterion 2 is basically ignored as long as an impact factor exists (at least I've seen that in AfD debates where an impact factor of 0.5 was ignored in favor of the argument that Journal Citation Reports mentions it). Criterion 3 is also subject to the pseudoscientific walled-garden problem. The argument that if you manage to convince a board to index your journal you magically meet all three criteria is also precious. Basically, the rationales of the essay, which I think you are faithfully reproducing, are really rather thin and buggy. jps (talk) 16:01, 6 January 2017 (UTC)
- Your main problem is that you object to pseudoscientific journals having articles on Wikipedia. Well, they are allowed to have articles on Wikipedia. As for "where an impact factor of 0.5 was ignored in favor of the argument that Journal Citation Reports mentions it": impact factors are assigned by Journal Citation Reports. To say they're ignoring a 0.5 impact factor because it's mentioned by JCR makes literally no sense. Headbomb {talk / contribs / physics / books} 16:25, 6 January 2017 (UTC)
- Not quite. My main problem is that pseudoscientific journals have articles on Wikipedia when they haven't been noted by the relevant experts in pseudoscience (which are necessarily not the pseudoscientists themselves). The argument I am making is that people in the AfD discussions will make mention of the fact that an impact factor exists while ignoring what it is (which, it seems, is what you just did). We should be having discussions about what the impact factor of the journal is, but instead, people will say, "Oh, Journal Citation Reports mentions it, so we have a reliable source!" Up until recently, this essay more-or-less argued that if the journal had an impact factor, it was notable. It still doesn't do much better. jps (talk) 17:14, 6 January 2017 (UTC)
- Two notes.
- First, I have noted this before, but RandyKitty made a great post above, Wikipedia_talk:Notability_(academic_journals)#Some_thoughts_on_why_we_need_NJournals_in_some_form_or_another, about the kinds of sources that are available about journals, and the two choices that the community had to make about journals:
- 1) Use normal RS and have very few articles about journals, or
- 2) use selective indexes and have a much larger set of articles about journals... which enables this project to serve kind of like the librarians for WP.
- It explains things really clearly. Clearly choice #2 was taken and has been acted on for seven years now. I think folks in this project should consider copying/adapting some of that section explaining this and add it to this guideline. jps if you want to try to shift the model to choice #1, it is going to take an RfC and you are going to have to grapple with the reasoning provided in that section above if you want to persuade the community to change the model.
- Second, as I also noted above, I do recommend that this project consider working to change WP:NOTDIRECTORY to create an exemption for journal articles, because the articles that are created under model #1 are directory entries. After that gets done, this essay should be updated to explain that. Jytdog (talk) 16:46, 5 January 2017 (UTC)
- This is a ridiculous all-or-nothing false dichotomy that doesn't make any sense whatsoever. We should look at each journal case-by-case and make arguments for notability. Simply having been indexed is evidence of notability, but there could be arguments against that as well. For example, if there are literally no sources which discuss the journal, we may decide that it is impossible to write a WP:NPOV article about the journal, but might be able to include it in a list of some sort. My worry is that when we have journals which fly under the radar of an indexing service (for example, when the panel of experts asked about indexing the journal only includes homeopaths for a journal on homeopathy), we will end up simply adding to the confusion by writing a permastub on the journal. It would be much better if we had a list inclusion, for example. At least then people would understand why Wikipedia is even mentioning it. jps (talk) 17:12, 5 January 2017 (UTC)
- "For example, if there are literally no sources which discuss the journal, we may decide that it is impossible to write a WP:NPOV article about the journal, but might be able to include it in a list of some sort. " If there are literally no sources, then nothing can be written about the journal, and it would fail WP:NJOURNALS quite spectacularly. If it's got an impact factor, then you have Journal Citation Reports as a source. Likewise each database serves as their own source on the journal. Plus, we can source a lot of non-controversial information (e.g. journal history, editors in chiefs, publishers, etc.) to the journal itself per WP:PRIMARY. See WP:JWG for more. Follow that, and you create articles like Acta Ophthalmologica, which you cannot possibly claim violates WP:NPOV. Headbomb {talk / contribs / physics / books} 17:25, 5 January 2017 (UTC)
- What is wrong with you? You seem to be willfully missing my point. Journal Citation Reports doesn't discuss journals beyond the citation report. It's a phonebook for journals that a particular group of people decided are worthy of cataloging. jps (talk) 20:46, 5 January 2017 (UTC)
- Keep at it, and you'll end up at WP:ANI for the 234th time for WP:NPA. And JCR is a source for the impact factor of the journal. That is a verifiable fact supported by a WP:RS. Headbomb {talk / contribs / physics / books} 21:08, 5 January 2017 (UTC)
- Keep at it, and a group of us will ask you to leave us alone here. You are the one being unhelpful. It's pretty clear. jps (talk) 21:14, 5 January 2017 (UTC)
- jps, I understand very well (as I think does everyone else) that you want model #1, and that if there are not normal RS for a journal it should go into a list article. You have made that clear. Ignoring the oddness of the sourcing situation around journals and pretending that the last seven years haven't happened are not going to help you persuade anyone. It is going to take an RfC to change NJournals this dramatically. Jytdog (talk) 17:26, 5 January 2017 (UTC)
- That approach calls for the response: "This is a failed proposal masquerading as an essay." --SmokeyJoe (talk) 21:40, 5 January 2017 (UTC)
- While I don't object to model #2, I'm also not saying that it is the model we need. In fact, I don't think either model is very good. The ideal is to have a general set of rules, but not make them automatic. Sometimes there will be journals that are indexed but that may not be notable enough for a standalone article. Why is that such a controversial statement? jps (talk) 21:43, 5 January 2017 (UTC)
- "For example, if there are literally no sources which discuss the journal, we may decide that it is impossible to write a WP:NPOV article about the journal, but might be able to include it in a list of some sort. " If there are literally no sources, then nothing can be written about the journal, and it would fail WP:NJOURNALS quite spectacularly. If it's got an impact factor, then you have Journal Citation Reports as a source. Likewise each database serves as their own source on the journal. Plus, we can source a lot of non-controversial information (e.g. journal history, editors in chiefs, publishers, etc.) to the journal itself per WP:PRIMARY. See WP:JWG for more. Follow that, and you create articles like Acta Ophthalmologica, which you cannot possibly claim violates WP:NPOV. Headbomb {talk / contribs / physics / books} 17:25, 5 January 2017 (UTC)
- This is a ridiculous false dichotomy all or nothing approach that doesn't make any sense whatsoever. We should look at each journal case-by-case and make arguments for notability. Simply having been indexed is evidence of notability, but there could be arguments against that as well. For example, if there are literally no sources which discuss the journal, we may decide that it is impossible to write a WP:NPOV article about the journal, but might be able to include it in a list of some sort. My worry is that when we have journals which fly under the radar of an indexing service (for example, when the panel of experts asked about indexing the journal only includes homeopaths for a journal on homeopathy), we will end up simply adding to the confusion by writing a permastub on the journal. It would be much better if we had a list inclusion, for example. At least then people would understand why Wikipedia is even mentioning it. jps (talk) 17:12, 5 January 2017 (UTC)
- This is becoming quite an unwieldy discussion, so please forgive me for not putting my comments directly behind those that I am reacting to. I am referring to WoKrKmFK3lwz8BKvaB94's comments above, asserting that deciding which databases are selective enough is an "editorial decision" just as much "as looking at the journal itself and deciding whether it is notable". So perhaps WoKrKmFK3lwz8BKvaB94 can tell us where he gets the information that a journal is a bad/pseudoscience/fringe journal. Any reliable sources for this? In fact, just as we don't call somebody a "quack" here without having a good source for that, so we shouldn't tag a journal as "fringe/pseudoscientific/whatever" without a good source. Our personal opinion should not enter into that. And WoKrKmFK3lwz8BKvaB94 cites WP:IDIDNOTHEARTHAT vis-à-vis Headbomb, but despite the fact that it has been explained to them over and over again how really selective databases work, they keep coming up with wild assertions that homeopaths decide what homeopathic journal gets into those databases. Any sources for that assertion? Or is that just your opinion? --Randykitty (talk) 19:37, 6 January 2017 (UTC)
- Yep. I've got sources. In particular, I know people in publishing and have worked on getting a journal indexed. These sources aren't allowed on Wikipedia, but, crucially (stay with me here): we're not writing a fucking Wikipedia article on how this works. What I'm saying is pretty straightforward: the indices have a few overworked non-specialists (career publishers) who convene the boards of "specialists" to evaluate x, y, or z. When it comes to alternative medicine, they choose alternative medicine practitioners. They aren't even apologetic about it. You can call them up and ask them directly.
- We're not dolts. We can tell whether a journal is promoting pseudoscience or not. It's not difficult to tell. The articles speak for themselves. By pretending that we cannot make any determination that a journal is WP:FRINGE, you are essentially behaving indistinguishably from a FRINGE-POV-pusher. Not a very good look, if I do say so.
- jps (talk) 19:48, 6 January 2017 (UTC)
- Thanks for the clarification. I now finally get it! Headbomb, DGG, David Eppstein, myself, and several others are not qualified to identify which databases are selective, because that's just an editorial decision just as much "as looking at the journal itself and deciding whether it is notable". That we're wrong on how these databases work can easily be seen from the fact that you "know people", and you therefore know that we are wrong. And of course we don't need reliable sources to claim that a journal is a fringe journal. We only need to take your word for it, because you know one when you see one. The rest of us are just FRINGE-POV pushers who don't know one when we see one. Excellent logic; stupid of me not to see this more clearly before! --Randykitty (talk) 08:26, 7 January 2017 (UTC)
You are just as qualified to determine the content and notability of a journal as any other Wikipedian; likewise with the selectivity of indices. My point is that by passing the buck to the indices you are pretending that you aren't making an editorial decision when, in fact, you are. jps (talk) 14:34, 7 January 2017 (UTC)
Your position, as I understand it, is that selective indexing = notability. It seems to me you adopt this position so as not to have to evaluate individual journals. I find this position problematic because it allows you to push for preserving journal articles that are in moribund states without having to show that it is possible to improve them. I contrast this with DGG's argument below that he could write an article for any given journal that would properly contextualize it. I have yet to see evidence of this, but at least that argument gets at the fundamental point that we are trying to WP:ENC here rather than WP:NOTDIRECTORY. jps (talk) 14:39, 7 January 2017 (UTC)
Request for DGG
- But, DGG, the argument on the AfD pages is made that these fringe journals are notable for something other than being completely negative. For example, the argument is made that they are notable because they are indexed. I didn't even bother trying to write about the fact that the company that publishes the journal operates out of a strip mall in suburban Minneapolis, sponsors conferences where drinking bleach is argued to be a cure for almost every malady known to man, and routinely publishes articles that argue against vaccination on the basis of debunked claims. None of these true points are admitted into the article because, in spite of all this stuff being easily discoverable through a simple Google search, nobody is writing any third-party exposition on alternative medical journals. So we're left with a stub that gives the impression that this journal is just as notable as, say, Astronomy and Astrophysics. jps (talk) 17:35, 9 January 2017 (UTC)