
Advanced Computer Systems

Assignment # 01

Submitted by: Ali Ayub

Registration #: 17-MS-EE-19

Submitted to: Prof. Dr. Gulistan Raja

Department of Electrical Engineering


University of Engineering and Technology, Taxila
1 SPEC benchmarks:
The Standard Performance Evaluation Corporation
(SPEC) is a non-profit corporation formed to establish, maintain, and endorse standardized
benchmarks and tools for evaluating the performance and energy efficiency of the newest
generation of computing systems. SPEC develops benchmark suites and also reviews and
publishes results submitted by its member organizations and other benchmark licensees.

Some of the latest SPEC benchmarks are given below:

a SPEC Cloud_IaaS 2016: SPEC's first benchmark suite to measure cloud performance.
It is aimed at cloud providers, cloud consumers, hardware vendors, virtualization
software vendors, application software vendors, and academic researchers. The
benchmark addresses the performance of infrastructure-as-a-service (IaaS) public or
private cloud platforms and is designed to stress provisioning as well as runtime
aspects of a cloud using I/O- and CPU-intensive cloud computing workloads. SPEC
selected social-media NoSQL database transactions and K-Means clustering using
map/reduce as two significant and representative workload types within cloud
computing.
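As a rough illustration of the K-Means workload type mentioned above, the following is a minimal plain-Python sketch of the algorithm; the data points and cluster count are invented for illustration, and the real benchmark runs K-Means via map/reduce at cloud scale rather than in a single process.

```python
def kmeans(points, k, iters=20):
    # Deterministic initialization for reproducibility: first k points.
    centroids = list(points[:k])
    for _ in range(iters):
        # Assignment step: bucket each point with its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(d) / len(c) for d in zip(*c))
    return centroids

# Two well-separated 2-D blobs; centroids should settle near each blob's mean.
pts = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.2),
       (10.0, 10.1), (10.2, 9.9), (9.9, 10.0)]
cents = sorted(kmeans(pts, 2))
```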

b SPEC CPU2017: Designed to provide performance measurements that can be used to
compare compute-intensive workloads on different computer systems, SPEC CPU2017
contains 43 benchmarks organized into four suites: SPECspeed 2017 Integer, SPECspeed
2017 Floating Point, SPECrate 2017 Integer, and SPECrate 2017 Floating Point.
SPEC CPU2017 focuses on compute-intensive performance, which means these
benchmarks emphasize the performance of:

 Processor - The CPU chip(s).
 Memory - The memory hierarchy, including caches and main memory.
 Compilers - C, C++, and Fortran compilers, including optimizers.

SPEC CPU2017 intentionally depends on all three of the above - not just the
processor. SPEC CPU2017 is not intended to stress other computer components such as
networking, graphics, Java libraries, or the I/O system; other SPEC benchmarks
focus on those areas.
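SPEC CPU composite scores are computed as the geometric mean of per-benchmark ratios of a fixed reference time to the measured run time. A small sketch of that scoring arithmetic follows; the reference and measured times are invented for illustration.

```python
import math

def spec_score(ref_times, run_times):
    """Composite score as the geometric mean of per-benchmark ratios
    (reference time / measured time), the scheme SPEC CPU scoring uses."""
    ratios = [ref / run for ref, run in zip(ref_times, run_times)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical reference and measured seconds for three benchmarks:
# ratios are 4.0, 4.0, and 5.0, so the score is the cube root of 80.
score = spec_score([1000, 2000, 1500], [250, 500, 300])
```

Using a geometric rather than arithmetic mean keeps any single benchmark from dominating the composite.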

c SPEC CPU2006: Designed to provide performance measurements that can be used to
compare compute-intensive workloads on different computer systems, SPEC CPU2006
contains two benchmark suites: CINT2006 for measuring and comparing compute-intensive
integer performance, and CFP2006 for measuring and comparing compute-intensive
floating-point performance.
d SPEC CPU2000: obsolete
e SPEC CPU95: obsolete
f SPEC CPU92: obsolete
g SPECviewperf® 12.1: The SPECviewperf 12 benchmark is the worldwide standard for
measuring graphics performance based on professional applications. It measures the
3D graphics performance of systems running under the OpenGL and DirectX application
programming interfaces. The benchmark’s workloads, called viewsets, represent
graphics content and behavior from actual applications.
SPECviewperf 12.1 has been tested and is supported under the 64-bit versions
of Microsoft Windows 7 and Windows 10. Results from SPECviewperf 12.1 remain
comparable to those from V12.0.1 and V12.0.2.
h SPECwpc V2.1: The SPECwpc benchmark measures all key aspects of workstation
performance based on diverse professional applications. The latest version is SPECwpc
2.1, which fixes an issue that caused the benchmark to crash in the Financial Services
section when running Windows 10. The change has led to higher scores for the Financial
Services composite number, so those results cannot be compared to results from
SPECwpc 2.0.
Major upgrades in the SPECwpc benchmark beginning with V2.0 include:
 Improved storage workloads that better reflect performance of NAND Flash devices
for high-capacity data storage.
 Better scalability measurement for multi-core systems and the ability to request a
specific number of threads or processes for multi-threaded workloads.
 Ability to condition drives for NAND Flash devices before performance
measurement.
 A new PTC Creo CAD/CAM workload in the product development category.
 Real-world enhancements for several workloads in the test suite.
i SPECapc℠ for Maya 2017: The SPECapc® for Maya® 2017 benchmark, released on
September 27, 2017, is all-new performance evaluation software for systems running
Autodesk Maya 2017 3D animation software.
The benchmark includes nearly 80 individual tests that reflect the processes used
to model, animate and render scenes within the application. Composite scores are
provided for CPU, GPU interactive, GPU animation and GPGPU performance. There is
an option to run the benchmark at 4K resolution.
The SPECapc project group developed the benchmark in cooperation with
Autodesk, which supplied seven new graphics models and scenes. The following
workloads are included within the SPECapc for Maya 2017 benchmark:
 Sven animation – 10 copies of a character model, rigged for animation.
 Tiger – A 1.3-gigabyte realistically rendered tiger model.
 Bifrost bridge fall – A Bifrost simulation of a bridge falling into water.
 Sven space crash – A Bifrost simulation of a spaceship crashing into water.
 Sven textured model – A textured character model that measures retessellation
performance.
 Sven space animation – An action-scene animation featuring the Sven character.
 Jungle escape – Another action-scene animation featuring the Sven character.
 Toy store – A walkthrough of a city scene.
j SPECapc℠ for 3ds Max™ 2015: SPECapc for 3ds Max 2015 is performance evaluation
software for systems running Autodesk 3ds Max 2015. The benchmark was introduced on
August 13, 2014. It requires that users have a working version of 3ds Max 2015 with Service Pack
1 applied.
SPECapc for 3ds Max 2015 contains 48 tests for comprehensive measurement
of modeling, interactive graphics, visual effects, CPU and GPU performance. Features
in the latest SPECapc benchmark are keyed to upgrades in 3ds Max 2015, including
new DirectX 11 shaders and vector maps, Nitrous viewport enhancements, and new
dynamics and visual effects. The benchmark also improves run-to-run consistency
and results reporting.

2 TPC benchmark:
The TPC is a non-profit corporation founded to define transaction processing and
database benchmarks and to disseminate objective, verifiable TPC performance data to the
industry. While TPC benchmarks certainly involve the measurement and evaluation of
computer functions and operations, the TPC regards a transaction as it is commonly
understood in the business world: a commercial exchange of goods, services, or money. A
typical transaction, as defined by the TPC, would include updating a database system
for such things as inventory control (goods), airline reservations (services), or banking
(money).

a TPC-C: TPC-C simulates a complete computing environment where a population of users
executes transactions against a database. The benchmark is centered on the principal
activities (transactions) of an order-entry environment. These transactions include
entering and delivering orders, recording payments, checking the status of orders, and
monitoring the level of stock at the warehouses. While the benchmark portrays the
activity of a wholesale supplier, TPC-C is not limited to the activity of any particular
business segment, but rather represents any industry that must manage, sell, or
distribute a product or service.

TPC-C performance is measured in new-order transactions per minute. The
primary metrics are the transaction rate (tpmC), the associated price per transaction
($/tpmC), and the availability date of the priced configuration.
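The price/performance metric above reduces to simple arithmetic: the total cost of the priced configuration divided by its sustained transaction rate. A small sketch, with the price and rate invented for illustration:

```python
def price_performance(total_price_usd, tpmc):
    """TPC-C price/performance: total priced-configuration cost
    divided by sustained new-order transactions per minute ($/tpmC)."""
    return total_price_usd / tpmc

# Hypothetical result: a $500,000 configuration sustaining 1,000,000 tpmC
# yields a price/performance of $0.50 per tpmC.
dollars_per_tpmc = price_performance(500_000, 1_000_000)
```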

b TPC-DI: Data Integration (DI), also known as ETL, is the analysis, combination, and
transformation of data from a variety of sources and formats into a unified data model
representation. Data Integration is a key element of data warehousing, application
integration, and business analytics.
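The extract-transform-load pattern that TPC-DI exercises can be sketched in a few lines of Python; the sources, keys, and fields below are invented for illustration, and a real DI workload operates on far larger, messier inputs.

```python
# Extract: rows from two hypothetical sources in different formats.
crm_rows = [{"cust": "A17", "spend": "120.50"},
            {"cust": "B02", "spend": "80.00"}]      # CSV-like string fields
web_rows = [("A17", 3), ("B02", 7)]                 # (customer, visit count)

# Transform: normalize types and index both sources by the shared key.
spend = {r["cust"]: float(r["spend"]) for r in crm_rows}
visits = dict(web_rows)

# Load: a unified representation ready for a warehouse table.
unified = [{"cust": c, "spend": spend[c], "visits": visits.get(c, 0)}
           for c in sorted(spend)]
```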
c TPC-DS: TPC-DS is the de-facto industry standard benchmark for measuring the
performance of decision support solutions including, but not limited to, Big Data
systems. The current version is v2. It models several generally applicable aspects of a
decision support system, including queries and data maintenance. Although the
underlying business model of TPC-DS is a retail product supplier, the database schema,
data population, queries, data maintenance model and implementation rules have been
designed to be broadly representative of modern decision support systems.
d TPC-E: TPC Benchmark™ E (TPC-E) is a new On-Line Transaction Processing (OLTP)
workload developed by the TPC. The TPC-E benchmark uses a database to model a
brokerage firm with customers who generate transactions related to trades, account
inquiries, and market research. The brokerage firm in turn interacts with financial
markets to execute orders on behalf of the customers and updates relevant account
information.
The benchmark is “scalable,” meaning that the number of customers defined for the
brokerage firm can be varied to represent the workloads of different-size businesses.
The benchmark defines the required mix of transactions the benchmark must maintain.
The TPC-E metric is given in transactions per second (tps). It specifically refers to the
number of Trade-Result transactions the server can sustain over a period of time.
Although the underlying business model of TPC-E is a brokerage firm, the database
schema, data population, transactions, and implementation rules have been designed to
be broadly representative of modern OLTP systems.

e TPC-Energy: The TPC-Energy specification contains the rules and methodology for
measuring and reporting energy metrics in TPC benchmarks. This includes the energy
consumption of system components associated with typical business information
technology environments, which are characterized by:

 Energy consumption of servers
 Energy consumption of disk systems
 Energy consumption of other items that consume power and are required by the
benchmark specification.

Measuring and publishing the TPC-Energy metrics in the TPC benchmarks
is optional and is not required to publish a TPC result.

f TPC-Pricing: The TPC Pricing Subcommittee was chartered to recommend revisions to
the existing pricing methodology so that prices used in published TPC results are
verifiable. In the course of this activity, a decision was made to develop a single pricing
specification that is consistent across all TPC benchmarks.

References:
1. https://www.eembc.org/benchmark/products.php
2. http://www.tpc.org/information/benchmarks.asp
3. http://www.spec.org/benchmarks.html#gwpg