
Central Computer and Telecommunications Agency

From Wikipedia, the free encyclopedia

Central Computer and Telecommunications Agency
  • Abbreviation: CCTA
  • Formation: 1957 (as the TSU)
  • Dissolved: 2000 (subsumed into the OGC)
  • Legal status: Defunct executive government agency
  • Purpose: New telecommunications and computer technology for the UK government
  • Location: Rosebery Court, St Andrew's Business Park, Norwich, Norfolk, NR7 0HS, UK
  • Region served: UK
  • Membership: Electronics and computer engineers
  • Parent organization: HM Treasury
  • Website: www.ccta.gov.uk

The Central Computer and Telecommunications Agency (CCTA) was a UK government agency providing computer and telecoms support to government departments.

History


Formation


Archived records


CCTA records are held by The National Archives.[1]

In 1957, the UK government formed the Central Computer Agency (CCA) Technical Support Unit (TSU) within HM Treasury to evaluate and advise on computers, initially based around engineers from the telecommunications service. As this unit evolved, it morphed into the Central Computer and Telecommunications Agency, which also had responsibilities for procurement of United Kingdom Government technological equipment, and later, that centrally funded for University and Research Council systems.

Technical services


Nearly all of the names and authors quoted or referenced in this section were CCTA engineers or scientists.

The first external technical publication, in 1960 by J. W. Freebody and K. M. Heron, was “Some engineering factors of importance in relation to the reliability of government A.D.P. systems”. Nearly 30 computer systems had been installed at that time.[2] It concluded that reliability was the most important single factor, identifying areas and activities that required investigation by the new organisation. A later career review confirmed that John Freebody was promoted to Staff Engineer and set the task of founding the Technical Support Unit.[3]

In 1965 responsibility for TSU was transferred from HM Treasury to the Ministry of Technology. At that time the telecommunications engineering staff comprised 8 dealing with Systems Evaluations, 6 with Peripheral Equipment and 10 in the areas of Accommodation,[4] Testing and Maintenance. Details of names, grades, qualifications, salaries and relevant experience can be found in Hansard Volume 717, debated on Tuesday 27 July 1965.[5]

Technical services reliability and acceptance trials


Procurement contracts included guaranteed service levels which, at least in the early days, were monitored by TSU engineers, to whom all fault incidents and system availability levels were submitted on a monthly basis. The contracts also included requirements to run on-site, and sometimes pre-delivery, acceptance trials of a specified format, designed and supervised by engineering staff.

The acceptance tests comprised a series of demonstrations to verify that everything had been delivered and appeared to function, followed by stress testing of up to 40 hours, spread over a few days, depending on system size. For the latter, engineering test programs were used, along with available user applications, and the criterion of success was to achieve a given level of uptime. In 1968, new procedures were introduced, particularly involving stress testing, where each main test was aimed to run for 15 minutes, with criteria that, besides a maximum time limit, each test was required to run failure free six times in succession.
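The 1968 stress-testing criterion can be sketched as a simple pass/fail check. This is an illustrative reconstruction only, with invented function and parameter names; the 15-minute run length and six-successes rule come from the description above, while the overall time limit is supplied as a parameter.

```python
# Illustrative sketch (not CCTA code) of the 1968 acceptance criterion:
# a test passes once it completes six failure-free runs in succession,
# provided the overall maximum time limit has not been exceeded.

def acceptance_passed(run_log, time_limit_minutes,
                      required_successes=6, run_minutes=15):
    """run_log: sequence of booleans, True meaning a failure-free run."""
    consecutive = 0
    elapsed = 0
    for ok in run_log:
        elapsed += run_minutes
        if elapsed > time_limit_minutes:
            return False          # maximum time limit exceeded
        consecutive = consecutive + 1 if ok else 0
        if consecutive >= required_successes:
            return True
    return False

# Six clean runs within a two-hour limit pass; a failure resets the count
# and the extra runs needed can then exceed the time limit.
print(acceptance_passed([True] * 6, 120))                        # True
print(acceptance_passed([True, True, False] + [True] * 6, 120))  # False
```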

During this period, on invitation, five CCTA engineers presented papers on acceptance testing at the Institution of Electrical Engineers.[6]

At this stage, concern was raised regarding how to test computers with the new multiprogramming operating systems. The problem was solved by Roy Longbottom who, at various promotion levels between 1968 and 1982, was responsible for designing and supervising acceptance trials of the larger scientific systems. He produced 17 programs, written in the FORTRAN programming language: 5 for CPUs, 4 for disk drives, 3 for magnetic tape units and others for printers, card and paper tape punches and readers. Program code listings are included in the book Computer System Reliability (Appendix 1).[7]

By 1972, 800 acceptance tests of computer systems and enhancements had been carried out, including 500 for complete systems, as reported in The Post Office Electrical Engineers Journal.[8] The latter tests included 100 using the new procedures, from 11 different contractors. The first candidate was an IBM 360 Model 65 at University College London in 1971, followed in 1972 by trials on all mainframes, minicomputers and supercomputers covered by CCTA contracts. Later that year, the top end systems tested were the $5 million scalar supercomputers: the CDC 7600 at the University of London Computer Centre and the IBM 360/195 at the UK Meteorological Office. Not included in these 100, but significant, 1973 trials included the Atlas computer at Cambridge University, a latter day version of the 1962 UK supercomputer. During the 100 trials, 23 systems failed to meet the specified criteria at the first attempt.

By 1979 more than 1600 acceptance tests of computer systems and enhancements had been carried out. For the latest 400 system tests, carried out under the new procedures, 14% were recorded as failures and 24% as having a conditional pass. Up to three attempts were allowed, with none being completely rejected, albeit some accepted with penalty conditions. See Chapter 10 in the Longbottom book.[7]

Detailed analysis of fault returns, hands-on observations during acceptance trials and system appraisal activities led to a deeper understanding of reliability issues, published in a 1972 Radio and Electronic Engineer journal paper titled “Analysis of Computer System Reliability and Maintainability”, with probability considerations.[9] Later came a conference paper, “Reliability of Computer Systems” (Archive),[10] and the Roy Longbottom book,[7] which particularly acknowledges input provided by Ian Thomson on computer system maintainability and Trevor Jones on environmental aspects.

Trials in 1979 included the first Cray 1 vector supercomputer to be delivered to the UK at Atomic Weapons Research Establishment and, by 1982, the CDC Cyber 205 for UK Meteorological Office, where total system costs could be $10 million. Both these systems had pre-delivery trials in the USA. For these, Roy Longbottom converted the scalar CPU programs to fully exploit capabilities of the new vector processors. Results of the converted Vector Whetstone (benchmark) were included in the paper “Performance of Multi-User Supercomputing Facilities” presented in the 1989 Fourth International Conference on Supercomputing, Santa Clara.[11][12]

Details were also included in the June 1990 Advanced Computing Seminar at Natural Environment Research Council Wallingford. This led to the Council for the Central Laboratory of the Research Councils Distributed Computing Support collecting results from running “on a variety of machines, including vector supercomputers, minisupers, super-workstations and workstations, together with that obtained on a number of vector CPUs and on single nodes of various MPP machines”. More than 200 results, up to 2006, are included in the report available on the Wayback Machine Archive, in entries to at least the year 2007 section.[13]

For the systems identified as supercomputers, there were nine acceptance testing sessions, two of which were failures, one due to excessive CPU problems and the other due to design issues on the I/O subsystem. Both of these were induced by the CCTA stress testing programs.

Technical services system appraisal


During the early days there were considerations of future technology, including telecommunications in the 1970 book “Data transmission - the future : the development of data transmission to meet future users' needs”,[14] found in National Library of Australia catalogue 169638. But the main emphasis was appraisal of the latest computer system hardware and software. Initially, this involved collecting information on all appropriate new products, followed by more detailed investigation when a product was being considered for a new project. This included a tour of the production factory and discussions with higher level engineering, design and quality control staff.

The National Archives CCTA records[1] include technical appraisal reports, at the time of writing, up to 1986 (search for quoted reports). The first in what became a standard format was “System Summary Notes” (range 5000 to 6999), starting in 1967 with such systems as early IBM 360 mainframes and the Digital Equipment Corporation PDP-8 minicomputer, up to the last issue in 1980. These are based on standard forms with numerous entries. Other reports identified in the Archives are “Technical Notes” between 1975 and 1986, “Internal Technical Memoranda” 1973 to 1986 and “Technical Memoranda” 1975 to 1986. The number of reports cannot be easily determined from the provided data.

Computer system performance


Before across-the-board standard benchmarks became available, the average speed rating of computers was based on calculations for a mix of instructions, with the result given in Kilo Instructions Per Second (KIPS). The most famous was the Gibson Mix for scientific computing. This was included in CCTA calculations, along with an ADP Mix and a Process Control Mix, in CCTA Technical Note 3806 Issue 5, with 212 sets of results from 18 manufacturers, pre-1960 to 1971. In 1977, later results were included in CCTA Technical Memorandum 1163 (both via [1]). All those results are also available in a 2017 PDF file.[15]
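An instruction mix rating of this kind reduces to a weighted-average calculation, sketched below. The weights and timings are invented for illustration, not taken from the actual Gibson Mix tables.

```python
# Hypothetical sketch of an instruction mix rating such as the Gibson Mix:
# the weighted average instruction time over the instruction classes gives
# the speed rating in Kilo Instructions Per Second (KIPS).

def mix_kips(weights, times_microseconds):
    """weights: fraction of each instruction class (summing to 1.0);
    times_microseconds: execution time of each class in microseconds."""
    average_us = sum(w * t for w, t in zip(weights, times_microseconds))
    return 1000.0 / average_us    # instructions per millisecond = KIPS

# e.g. 60% fixed point at 2 µs, 30% floating point at 10 µs, 10% branches at 1 µs
print(round(mix_kips([0.6, 0.3, 0.1], [2.0, 10.0, 1.0]), 1))   # 232.6
```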

In 1972 Harold Curnow wrote the Whetstone Benchmark in the FORTRAN programming language, based on the work of Brian Wichmann of the National Physical Laboratory.[16] This executes 8 test functions, 5 of which involve floating point calculations that dominate running time. Overall performance was calculated in thousands of Whetstone instructions per second (KWIPS). The program became the first general purpose benchmark that set industry standards of computer system performance. Enhancements by Roy Longbottom provided self timing arrangements and calibration to run for a predetermined time on present and future systems, also reporting the performance of each of the 8 tests. The calibrated time was mainly 10 seconds and is still applicable after 50 years.
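The self-timing calibration idea can be sketched as follows. This is an assumed reconstruction for illustration, not the Whetstone source: a trial number of passes of a test section is timed, then the pass count is scaled so the section runs for roughly the chosen calibrated time on whatever system executes it.

```python
# Sketch of benchmark self-calibration (assumed, not the actual Whetstone
# code): measure a trial run, then scale the loop count to a target time.

import time

def calibrate_passes(test_section, target_seconds=10.0, trial_passes=10000):
    """test_section(n) executes n passes of one benchmark test section."""
    start = time.perf_counter()
    test_section(trial_passes)
    trial_time = time.perf_counter() - start
    # Scale up for the target duration; a real benchmark would re-measure
    # and iterate until the measured time is close enough to the target.
    return max(1, int(trial_passes * target_seconds / max(trial_time, 1e-9)))

def sample_section(n):
    """Stand-in floating point test section."""
    x = 1.0
    for _ in range(n):
        x = (x + 1.0) * 0.5
    return x

passes = calibrate_passes(sample_section, target_seconds=0.1)
```

Because the count is derived from a measurement rather than fixed, the same program remains usable as systems become faster, which is why the 10 second calibration is described as still applicable after 50 years.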

In 1978, Roy Longbottom, who inherited the role of design authority of the benchmark, also produced a version to exploit supercomputer processing hardware, covered in the reports “Performance of Multi-User Supercomputing Facilities”[11] and “Whither Whetstone? The synthetic benchmark after 15 years”,[17] in a book.[18]

Original Whetstone Benchmark results are in the 1985 CCTA Technical Memorandum 1182 (via Archive[1]), where overall speed is shown as MWIPS (millions). This contains more than 1000 results for 244 computers from 32 manufacturers.

On achieving 1 MWIPS, the Digital Equipment Corporation VAX-11/780 minicomputer became accepted as the first commercially available 32-bit computer to demonstrate 1 MIPS (Millions of Instructions Per Second), per CERN,[19] although this was not really appropriate for a benchmark dependent on floating point speed. This had an impact on the Dhrystone Benchmark, the second accepted general purpose computer performance measurement program, which has no floating point calculations. This produced a result of 1757 Dhrystones Per Second on the VAX-11/780, leading to a revised measurement of 1 DMIPS (also known as VAX MIPS), obtained by dividing the original result by 1757.
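The DMIPS convention amounts to a single division, shown here as a one-line helper: the raw Dhrystones-per-second figure is divided by the VAX-11/780's 1757 Dhrystones per second, so the reference machine rates exactly 1 DMIPS.

```python
# DMIPS (VAX MIPS) normalisation: Dhrystones per second divided by the
# VAX-11/780 reference figure of 1757 Dhrystones per second.

VAX_11_780_DHRYSTONES_PER_SECOND = 1757.0

def dmips(dhrystones_per_second):
    return dhrystones_per_second / VAX_11_780_DHRYSTONES_PER_SECOND

print(dmips(1757.0))    # 1.0 for the VAX-11/780 itself
```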

The Whetstone Benchmark also had high visibility concerning floating point performance of Intel CPUs and PCs, starting with the 1980 Intel 8087 coprocessor. This was reported in the 1986 Intel Application Report “High Speed Numerics with the 80186/80188 and 8087”.[20] The 8087 includes hardware functions for exponential, logarithmic and trigonometric calculations, as used in two of the eight Whetstone Benchmark tests, where these can dominate running time. Only two other benchmarks were included in the Intel procedures, showing huge gains over the earlier software based routines on all three programs.

Later tests, by the SSMEC Laboratory, evaluated Intel 80486 compatible CPU chips using their Universal Chip Analyzer.[21] Considering the two floating point benchmarks used by Intel in the above report, they preferred Whetstone, stating “Whetstone utilizes the complete set of instructions available on early x87 FPUs”. This might suggest that the Whetstone Benchmark influenced the hardware instruction set.

CCTA also influenced the programming code for the Linpack and Livermore Loops floating point benchmarks, initially for PC versions, where the original programs were unsuitable, particularly due to the PC's low resolution timer. The new versions, in the C programming language, included the new CCTA automatic calibration function to run for a specified finite time, still applicable 50 years later. Netlib accepted the former, renaming it as linpack-pc.c.[22] For the Livermore benchmark, C programming code was available for executing the loops, but the extensive background code, for such functions as data generation, timing parameters and numeric results validation, was in FORTRAN. This was converted to C. At least one other organisation has published a claimed completely rewritten C version that incorporates the unique CCTA background code, with no attribution.

CCTA test programs used in acceptance trials had parameters to control running times, enabling valid comparisons of CPU performance of all systems tested. Following a request for information, these and Whetstone Benchmark results were included in the external publication “A Guide to the Processing Speeds of Computers”, covering over 100 different computers with more than 700 results.[23] This included the acknowledgment “The authors would like to thank colleagues from the Central Computer Agency, namely Mr G Brownlee, Mr H J Curnow and Mr R Longbottom who have helped to collect much of the data making this system possible”.

From 1980 Roy Longbottom spent most of his time providing performance consultancy services to Departments and Universities. The latter included attending meetings of the Computer Board for Universities and Research Councils (National Archives[24]). He became a member of the Technical Subgroup of the National Policy Committee on Advanced Research Computers and the Universities' Benchmark Options Group. The latter involved leading a party to the USA, including discussions with Jack Dongarra and Frank McMahon, respectively authors of the Linpack and Livermore Loops, key benchmarks of the day for scientific applications.

In 1992, the Science and Engineering Research Council requested CCTA to provide independent observation and reporting on benchmarking a new supercomputer for the University of London Computer Centre, comprising a large sample of typical user applications, with Roy Longbottom covering Fujitsu and NEC computers in Japan and Rob Whetnall overseeing Cray and Convex Computer Corporation systems in the USA. The CCTA scalar and vector Whetstone Benchmarks were also run. A combination of the latter can help in evaluating the performance of multi-user supercomputing operation,[11] where the one that can demonstrate superior performance on specific applications is not necessarily the best choice, and the level of vectorisation and number of scalar processors can be more important. In this case, calculations from the results of the CCTA programs indicated the same choice of system as the university's benchmark.

The aforementioned performance consultancy covered more than 45 projects between 1990 and 1993, mainly for data processing applications, with systems from 18 manufacturers, including mainframes, minicomputers and PCs. Activities included detailed sizing, modelling, user application based benchmarking, general advice and troubleshooting. CCTA's work was publicised at various conferences, starting with one on in-house software for benchmarking and capacity planning at ECOMA 12 in Munich, 1984,[25] then benchmarking and workload characterisation at Edinburgh University, 1986 (Page 5).[26]

The next one, on Database System Benchmarks and Performance Testing was in a Conference on Parallel Processors, at NPL in 1992, providing a warning of the dangers for the supercomputer community, and published in a later book.[27]

Finally, a new approach to performance management was suggested, based on the assumption that initial sizing estimates would be incorrect and that actions should be considered for application at each stage of procurement, presented at the UKCMG Conference, Brighton, in 1992.[28] It was proposed following performance issues on a number of new small systems using the UNIX operating system. In this case, the reasons were identified by measuring CPU, input/output, communications and memory utilisation of a number of transactions, using the UNIX SAR performance monitor. The first problem was mainly transactions using too much CPU time, requiring more efficient code or a CPU upgrade. The second was the single disk drive which, despite having adequate capacity, was unable to handle the high random access rate; the solution was to spread the data over more than one drive. To help in identifying solutions, or for “what if” considerations, a sizing model, "A Spreadsheet Computer Performance Queuing Model for Finite User Populations", was produced to instantly indicate the likely impact of changes on response times, throughput and hardware utilisation.[29]
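The kind of finite-user-population calculation such a sizing model performs can be sketched with standard Mean Value Analysis for a single service centre with terminals. This is an assumed formulation for illustration, not the actual CCTA spreadsheet model, which covered several hardware resources.

```python
# Minimal Mean Value Analysis sketch for a closed system of n_users
# terminals sharing one service centre (assumed single resource; the
# real model also handled disks, communications and memory).

def mva_finite_users(n_users, service_demand, think_time):
    """All times in seconds; returns (response_time, throughput, utilisation)."""
    queue = response = throughput = 0.0
    for n in range(1, n_users + 1):
        response = service_demand * (1.0 + queue)   # arrival theorem
        throughput = n / (response + think_time)    # interactive response law
        queue = throughput * response               # Little's law
    return response, throughput, service_demand * throughput

# e.g. 20 users, 0.5 s CPU demand per transaction, 10 s think time
r, x, u = mva_finite_users(20, 0.5, 10.0)
```

Varying the user count or service demand shows the "what if" behaviour described: as utilisation approaches 100%, each added user extends response time rather than throughput.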

Other data processing benchmarks produced by CCTA Performance Branch included one measuring performance of mixes of processor bound activities, written in the COBOL programming language. A total of 129 sets of results over computers from 22 different manufacturers are in Internal Memo 5219. A second one is the Medium System Benchmark, with limited results in Internal Memo 5365 covering 35 systems from 8 manufacturers. This also indicates Technical Memoranda numbers of reports containing full results, in the range 15047 to 15247 (example ICL reports are 15147/1 to 15147/14) - see Archived Information for quoted reports.[1] The benchmark comprised six real representative programs with disk and magnetic tape input/output, covering updates, sorting, compiling and multi-stream operation, measuring CPU and elapsed times and the number of data transfers.

CCTA computer benchmarking and testing legacy


After retirement, Roy Longbottom, as the latter day design authority of the Whetstone Benchmark, converted the latest FORTRAN code into the C programming language, also creating a new series of benchmarks and stress testing programs based on previous CCTA activities. These were freely available, produced in conjunction with the CompuServe Benchmarks and Standards Forum (see Wayback Machine Archive,[30] covering PC hardware 1997 to 2008).

Later, with further development, programs and results were made freely available on a dedicated website (that will have a limited lifetime). Historic details from 2008 onwards are in the Wayback Machine Archive,[31] where all files appear to be downloadable from most impressions. From 2017 onwards, the details were made available at ResearchGate in more referenceable PDF files. In 2024 there were 40 of these reports to read or download, by when a total of more than 76,000 Reads and 79 Citations had been reported. Brief descriptions of all files are included in an indexing file[32] (download to open files). The PDF files include 12 for Raspberry Pi computers, for which Roy Longbottom had been recruited by the Raspberry Pi Foundation as a voluntary member of the Raspberry Pi pre-release Alpha Testing Team from 2019.

By the 1990s the Whetstone Benchmark and results had become relatively popular. A notable earlier quotation, from the 1985 “A portable seismic computing benchmark” in the European Association of Geoscientists and Engineers journal, was "The only commonly used benchmark to my knowledge is the venerable Whetstone benchmark, designed many years ago to test floating point operations".[33]

Then there was great interest in historic performance. Unlike those for the other Classic Benchmarks, Dhrystone, Linpack and Livermore Loops, Whetstone result tables were not available in the public domain, but (in honour of CCTA, for this and other publications) this was rectified from 2017. The first new report was “Computer Speeds From Instruction Mixes pre-1960 to 1971”.[15] As with the following one, the identified year of first delivery and purchase prices were added.

The second was “Whetstone Benchmark History and Results”,[34] with more detail and added results, particularly for PCs, up to 2013, and double the number of computers covered. The most notable citation, for this and Gibson Mix, was by Tony Voellm, then Google Cloud Performance Engineering Manager, entitled “Cloud Benchmarking: Fight the black hole”.[35] This considered available benchmarks and performance by time with detailed graphs, including those from the Mix and Whetstone reports.

The first of other reports, attributable to earlier CCTA gained knowledge but not previously published, is “Computer Speed Claims 1980 to 1996”.[36] This covers more than 2000 mainframes, minicomputers, supercomputers and workstations, from around 120 suppliers, with main speeds in Millions of Instructions Per Second (MIPS), Millions of Floating Point Operations Per Second (MFLOPS) and CPU clock speed in MHz. Cost and production year are also included, when available.

Next, based on programming in Intel 8086 assembly code, learned earlier, is “PC CPUID 1994 to 2013, plus Measured Maximum Speeds Via Assembler Code”.[37] This contains 27 pages of PC CPU identification numbers, operating speeds, ranges of models and cache sizes, by year, then performance of more than 30 types of processor over 12 CPU and memory benchmarks. Separate performance comparison tables are provided for handling data in the CPU, caches and RAM. The diversity of results demonstrates the uselessness of general performance comparisons based on a single number.

The following reports highlight earlier unique CCTA experiences, without which they could not have been produced. The first is “Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets”.[38] Results are initially based on the Classic Benchmarks, the first programs to set standards of performance for scientific computing, comprising the 1970 Livermore Loops, the 1972 Whetstone and the 1979 Linpack 100 benchmarks. Further results cover the 1979 Vector Whetstone performance, high speed floating point calculations and multiprocessing. The report includes the following comparison with the first version of the Raspberry Pi computer, based on average Livermore Loops speeds, as this benchmark was used to verify the performance of the first Cray 1.

"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1".

The later Pi 400 PC is shown to be 78.8 times faster and that could increase up to four times, using all CPU cores.

That quotation was reproduced in numerous Internet posts, some including a reference to the author having worked for “the UK Government Central Computer Agency”, as quoted in the report. A total of more than 60 posts were found across LinkedIn, X (Twitter) and Facebook, with more than 30 thousand views. This was based on an HTML version of the comparisons on the author's website (Archive Copy),[39] where site analytics registered almost 190,000 HTML file views between December 2023 and January 2024, with nearly 90% for the Cray report. Accesses were from North America 47%, Europe 37%, Asia 11%, Oceania 3% and Other 2%, the Agency's involvement being spread around the world.

CCTA influence is also highlighted in “Celebrating 50 years of computer benchmarking and stress testing”.[40]

IS/IT strategies


In this area, CCTA's work during the 1970s, 1980s and 1990s was primarily to (a) develop central government IT professionalism, (b) create a body of knowledge and experience in the successful development and implementation of IS/IT within UK central government, (c) brief Government Ministers on the opportunities for use of IS/IT to support policy initiatives (e.g. "Citizen's Charter" / "e-government") and (d) encourage and assist UK private sector companies to develop and offer products and services aligned to government needs.

Over the 3 decades, CCTA's focus shifted from hardware to a business oriented systems approach with strong emphasis on business led IS/IT Strategies which crossed Departmental (Ministry) boundaries encompassing several "Departments" (e.g. CCCJS – Computerisation of the Central Criminal Justice System). This inter-departmental approach (first mooted in the mid to late 1980s) was revolutionary and met considerable political and departmental opposition.

In October 1994, MI5 took over its work on computer security, protecting the government's network (usually the Treasury's) from hacking. In November 1994, CCTA launched its website. In February 1998 it built and ran the government's secure intranet. The MoD was connected to a separate network. In December 1998, the DfEE moved its server from CCTA at Norwich to NISS (National Information Services and Systems) in Bath when it relaunched its website.[41]

Between 1989 and 1992, CCTA's "Strategic Programmes" Division undertook research on exploiting Information Systems as a medium for improving the relationship between citizens, businesses and government. This parallelled the launch of the "Citizen's Charter" by the then Prime Minister, John Major, and the creation within the Cabinet Office of the "Citizen's Charter Unit" (CCTA had at this point been moved from HM Treasury to the Cabinet Office). The research and work focused on identifying ways of simplifying the interaction between citizens and government through the use of IS/IT. Two major TV documentaries were produced by CCTA – "Information and the Citizen" and "Hymns Ancient and Modern" which explored the business and political issues associated with what was to become "e-government". These were aimed at widening the understanding of senior civil servants (the Whitehall Mandarins) of the significant impact of the "Information Age" and identifying wider social and economic issues likely to arise from e-government.[citation needed]

Merger


During the late 1990s, its strategic role was eroded by the Cabinet Office's Central IT Unit (CITU – created by Michael Heseltine in November 1995), and in 2000 CCTA was fully subsumed into the Office of Government Commerce (OGC).[42]

Successors


Since then, the non-procurement IT / Telecommunications co-ordination role has remained in the Cabinet Office, under a number of successive guises:

Activities


CCTA was the sponsor of a number of methodologies, including:

The CCTA Security Group created the first UK Government National Information Security Policy, and developed the early approaches to structured information security for commercial organisations which saw wider use in the DTI Security Code of Practice, BS 7799 and eventually ISO/IEC 27000.

CCTA also promoted the use of emerging IT standards in UK government and in the EU, such as OSI and BS5750 (Quality Management) which led to the publishing of the Quality Management Library and the inception of the TickIT assessment scheme with DTI, MOD and participation of software development companies.

In addition to the development of methodologies, CCTA produced a comprehensive set of managerial guidance covering the development of Information Systems under 5 major headings: A. – Management and Planning of IS; B. – Systems Development; C. – Service Management; D. – Office Users; E. – IS Services Industry. The guidance consisted of 27 individual guides and was published commercially as "The Information Systems Guides" (ISBN 0-471-92556-X) by John Wiley and Sons. The publication is no longer available. This guidance was developed from the practical experience and lessons learned from many UK Government Departments in planning, designing, implementing and monitoring Information Systems and was highly regarded as "best practice". Some parts were translated into other European languages and adopted as national standards.

It also was involved in technical developments, for instance as the sponsor of Project SPACE in the mid-1980s. Under Project SPACE, the ICL Defence Technology Centre (DTC), working closely with technical staff from CCTA and key security-intensive projects in the Ministry of Defence (such as OPCON CCIS) and in other sensitive departments, developed an enhanced security variant of VME.

It managed (i.e. ran the servers for) UK national government websites, including the Royal Family's and www.open.gov.uk.

Structure


CCTA's headquarters were in London at Riverwalk House, Vauxhall Bridge Road, SW1, since used by the Government Office for London. This housed the main divisions with a satellite office in Norwich which focused on IS/IT Procurement – a function which had been taken over from HMSO (the Stationery Office) when CCTA was formed.

The office in Norwich was in the east of the city, off the former A47 (now A1042), just west of the present A47 interchange near the former St Andrew's Hospital. The site is now used by the OGC.

The HQ in London had four divisions:

  • Project support – major IT programmes – software engineering
  • Specialist support – evaluation of individual items of hardware and software
  • Strategic Planning and Promotion – project management and office technology (hardware and office automation)
  • Advance Technology – telecommunications and advanced technology (latest generation of computers)

References

  1. ^ a b c d e "CCTA Archive". National Archives. Retrieved 21 June 2024.
  2. ^ Freebody, J.W.; Heron, K.M. (January 1960). "Some engineering factors of importance in relation to the reliability of government A.D.P. systems". Institution of Electrical Engineers and the British Computer Society Ltd.
  3. ^ Freebody, J.W. (July 1966). "Notes and Comments" (PDF). The Post Office Electrical Engineers Journal: 135. Retrieved 31 May 2024.
  4. ^ Stephenson, M.; Fiddes, R.G. (April 1964). "Air-Conditioning in Computer Accommodation" (PDF). The Post Office Electrical Engineers Journal. Retrieved 19 May 2024.
  5. ^ "Hansard 1". UK Parliament. Retrieved 19 May 2024.
  6. ^ Iles, S.H.; Longbottom, R.; Thomson, A.M.M.; Skinner, P.J.; Clarke, J. (December 1971). "Colloquium on the specification of acceptance testing of control computers for on-line applications".
  7. ^ a b c Longbottom, Roy (1980). Computer System Reliability. Wiley. ISBN 0-471-27634-0.
  8. ^ Longbottom, R.; Stoate, K.W. (July 1972). "Acceptance Trials of Digital Computer Systems" (PDF). The Post Office Electrical Engineers Journal: 91. Retrieved 20 May 2024.
  9. ^ Longbottom, Roy (December 1972). "Analysis of Computer System Reliability and Maintainability". Radio and Electronic Engineer. 42 (12): 537. doi:10.1049/ree.1972.0092. Retrieved 20 May 2024.
  10. ^ Longbottom, R. Reliability of Computer Systems. ECOMA-10; Munich 1982. Retrieved 20 May 2024.
  11. ^ a b c Longbottom, Roy (1989). "Google Scholar References". p. 30. Retrieved 18 May 2024.
  12. ^ Proceedings, Fourth International Conference on Supercomputing and Third World Supercomputer Exhibition. Santa Clara Convention Center, Santa Clara, CA, USA: International Supercomputing Institute. April 1989. Retrieved 18 May 2024.
  13. ^ "The CCLRC Vector Whetstone Benchmark Results Up To 2006". Retrieved 10 June 2024.
  14. ^ Thomson, A.M.M.; Fiddes, R.G. (1970). Data transmission - the future : the development of data transmission to meet future users' needs. National Library of Australia. Retrieved 22 May 2024.
  15. ^ a b Longbottom, Roy (2017). "Computer Speeds From Instruction Mixes pre-1960 to 1971". researchgate.net. doi:10.13140/RG.2.2.14182.93765. Retrieved 23 April 2024.
  16. ^ Curnow, H.J.; Wichmann, B.A. (1976). "A Synthetic Benchmark". The Computer Journal. 19: 43–49. doi:10.1093/comjnl/19.1.43. Retrieved 24 May 2024.
  17. ^ Curnow, H.J. (September 1990). Whither Whetstone? The synthetic benchmark after 15 years. Chapman & Hall. pp. 260–266. ISBN 978-0-442-31198-8. Retrieved 26 May 2024.
  18. ^ Evaluating supercomputers: strategies for exploiting, evaluating and benchmarking computers with advanced architectures. Chapman & Hall, Ltd. United Kingdom. 1990. ASIN 0412378604.
  19. ^ "CERN-OBJ-IT-025 Computing and computers Model of the VAX-11/780". Retrieved 26 May 2024.
  20. ^ "High Speed Numerics with the 80186/80188 and 8087" (PDF). 1986. Retrieved 10 July 2024.
  21. ^ "Investigating SSMEC's (State Micro) 486s with the UCA". 25 June 2024. Retrieved 10 July 2024.
  22. ^ "Linpack 100 Benchmark for PC Systems". Netlib. Retrieved 8 June 2024.
  23. ^ Nott, C.W.; Wichmann, B.A. (1977). "A Guide to the Processing Speeds of Computers". Retrieved 27 May 2024.
  24. ^ "The Computer Board for Universities and Research Councils". National Archives. Retrieved 27 May 2024.
  25. ^ Longbottom, R. Performance Test Harness for Benchmarking and Capacity Planning of On-Line Systems. ECOMA-12; Munich 1984.
  26. ^ Longbottom, R. Benchmarking and Workload Characterisation. Second Computer and Telecommunications Engineering Workshop. University of Edinburgh. September 1986. doi:10.1145/32100. Retrieved 28 May 2024.
  27. ^ Dongarra, J.J.; Gentzsch, W. (1993). Computer Benchmarks. Elsevier Science Publishers. p. 339. Retrieved 28 May 2024.
  28. ^ Longbottom, R. A Performance Management Methodology for IS Procurement. UKCMG International Conference; Management and Performance Evaluation of Computer Systems, May 1992. Brighton.
  29. ^ Longbottom, Roy (October 2017). "A Spreadsheet Computer Performance Queuing Model for Finite User Populations.pdf". researchgate.net. doi:10.13140/RG.2.2.18376.01280. Retrieved 3 May 2024.
  30. ^ "Compuserve Benchmarks and Standards Forum". Wayback Machine Archive. Archived from the original on 6 December 2008. Retrieved 4 June 2024.
  31. ^ "Archive roylongbottom.org.uk". Wayback Machine Archive. Retrieved 4 June 2024.
  32. ^ Longbottom, Roy. "Computer Benchmarks and Stress Tests and Performance History Index". Retrieved 4 June 2024.
  33. ^ Hatton, H. (1 August 1985). "A portable seismic computing benchmark". European Association of Geoscientists and Engineers Journal. 3 (8). doi:10.3997/1365-2397.1985016. Retrieved 5 June 2024.
  34. ^ Longbottom, Roy (July 2017). "Whetstone Benchmark History and Results". researchgate.net. doi:10.13140/RG.2.2.26267.77603. Retrieved 5 June 2024.
  35. ^ Voellm, A.F. (September 2013). "Cloud Benchmarking: Fight the black hole" (PDF). Retrieved 5 June 2024.
  36. ^ Longbottom, Roy (July 2017). "Computer Speed Claims 1980 to 1996". researchgate.net. doi:10.13140/RG.2.2.22571.54569. Retrieved 6 June 2024.
  37. ^ Longbottom, Roy (July 2017). "PC CPUID 1994 to 2013, plus Measured Maximum Speeds Via Assembler Code" (PDF). researchgate.net. doi:10.13140/RG.2.2.12519.14247. Retrieved 6 June 2024. {{cite journal}}: Check |url= value (help)
  38. ^ Longbottom, Roy (March 2022). "Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets". doi:10.13140/RG.2.2.27437.56804. Retrieved 6 June 2024.
  39. ^ Longbottom, Roy (March 2022). "Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets". Wayback Machine. Wayback Machine Archive. Archived from the original on 7 April 2024. Retrieved 16 June 2024.
  40. ^ Longbottom, Roy (September 2022). "Celebrating 50 years of computer benchmarking and stress testing". researchgate.net. Retrieved 16 June 2024.
  41. ^ "BBC News | Education | The virtual education department". BBC News. Retrieved 13 October 2023.
  42. ^ "Office of Government Commerce Open for Business" – OGC press release. Retrieved 28 August 2007.
  43. ^ Government Digital Service. Retrieved 4 January 2014.
  44. ^ History of CRAMM. Archived 28 April 2008 at the Wayback Machine.