
Cristiano Ruschel Marques Dias - Student ID 9572967

Learning Spike Practice


A spike is a short practice, lasting less than one iteration, used to gather knowledge rather than to implement functionality. It can be used to reduce the risk of a project decision, to better understand a technology that may later be used in the project, to clarify the scope of a user story, or in other similar situations. Basically, instead of speculating about the answer to a question regarding the project, spikes encourage us to answer it by conducting experiments.
Spikes can be classified as:
- Functional Spikes, when the uncertainty lies in the user's interaction with the system, and
- Technical Spikes, when the question concerns how to approach the problem in the solution domain, commonly when deciding on the use of a certain technology in the implementation, or on the impact a certain user story will have on the project.
Many times spikes can be performed right away to settle small questions, so their use in the project does not require any extensive planning. However, a spike should be wrapped into a backlog item when it has an impact on the overall flow of the project. In either case, when you create a spike you need to clearly specify the question you are trying to answer. Therefore, to create a spike, all there is to do is state the question to be answered, add it to the backlog if it affects the project as a whole, and start it.
The best way to implement a spike is generally to build a small program or test that demonstrates the feature in question and that, preferably, produces a result that can be presented to the team. Techniques such as mocking, automated testing and code reuse should be used to keep the spike as short and as focused on the question as possible.
This practice relates to the agile values, among others, in the following ways:
- Functional Spikes help get quick feedback from the user regarding a specific question - Customer collaboration
- Spikes can help developers estimate the impact of a choice before they make it, helping the project be more responsive to change, in the sense that every possible change can be evaluated before it impacts the whole project - Responding to change
- Estimating the impact of a user story, for example, may prevent the team from wasting time on something that will not make it into a release, thereby prioritizing software that will work and be useful - Working software
- Since spikes produce evidence that can be shown to the team, after a person or part of the team performs a spike, the whole team can discuss the results - Individuals and interactions
References:
Leffingwell et al. 2014. URL: http://scaledagileframework.com/spikes/
Shore, James and Warden, Shane. The Art of Agile Development. 2 November 2007.
Manske, David. 2013. URL: http://davidemanske.com/scrum-spikes/

Technology description
The technology chosen as the subject of study for this coursework is the use of GPUs to create a particle system with a large number of particles, without exchanging data with the CPU. All the processing is expected to run on the GPU(s). A particle system is a technique in game physics and computer graphics that uses a large number of small, loosely defined elements to represent a fuzzy phenomenon such as fire, smoke or sparks. An example of a simple particle system is shown below:

(Image from http://www.darwin3d.com/gamedev/gdm0798.jpg)
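As a rough, hypothetical illustration of the data involved (the struct layout and the Euler update below are assumptions, not the final data layout that will be used on the GPU), each particle can be stored as a small structure holding its position, velocity and remaining lifetime, updated once per time step:

#include <vector>

// Illustrative sketch only: fields and integration scheme are placeholders.
struct Particle {
    float x, y, z;    // position
    float vx, vy, vz; // velocity
    float life;       // remaining lifetime in seconds
};

// Advance every particle by one time step dt using simple Euler integration.
void UpdateOnCpu(std::vector<Particle>& particles, float dt, float gravity = -9.8f)
{
    for (Particle& p : particles) {
        p.vy += gravity * dt; // gravity affects the vertical velocity
        p.x  += p.vx * dt;
        p.y  += p.vy * dt;
        p.z  += p.vz * dt;
        p.life -= dt;         // the particle is removed when life reaches zero
    }
}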


The programming interfaces that will be used are the OpenCL API for computing on the GPU and OpenGL for rendering on the GPU, with C++ as the development language. These APIs are interoperable to an extent that is known to cover this use case. This technology was chosen because of the importance of GPUs for general-purpose processing nowadays, especially in physics simulations, and because I want to analyse the scalability and feasibility of this approach for my Advanced Computer Graphics project.
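A minimal sketch of this interoperability is shown below. It assumes an OpenCL context created with OpenGL sharing enabled and a previously compiled kernel; context, queue, update_kernel and MAX_PARTICLES are placeholders, and error handling is omitted.

// Sketch only: context/queue/kernel setup is assumed to exist elsewhere.
GLuint vbo; // OpenGL vertex buffer that will hold particle positions
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, MAX_PARTICLES * 4 * sizeof(float), nullptr, GL_DYNAMIC_DRAW);

// Wrap the OpenGL buffer as an OpenCL memory object (no copy through the CPU).
cl_int err;
cl_mem cl_positions = clCreateFromGLBuffer(context, CL_MEM_READ_WRITE, vbo, &err);

// Each frame: hand the buffer to OpenCL, update it, hand it back to OpenGL.
glFinish(); // make sure OpenGL is done with the buffer
clEnqueueAcquireGLObjects(queue, 1, &cl_positions, 0, nullptr, nullptr);
clSetKernelArg(update_kernel, 0, sizeof(cl_mem), &cl_positions);
size_t global_size = MAX_PARTICLES;
clEnqueueNDRangeKernel(queue, update_kernel, 1, nullptr, &global_size, nullptr, 0, nullptr, nullptr);
clEnqueueReleaseGLObjects(queue, 1, &cl_positions, 0, nullptr, nullptr);
clFinish(queue); // make sure OpenCL is done before OpenGL draws
// ...then draw the buffer contents, e.g. glDrawArrays(GL_POINTS, 0, MAX_PARTICLES);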
References:
NVIDIA OpenCL Tutorial. 2014. URL: https://developer.nvidia.com/opencl
Kolb, A., Latta, L., Rezk-Salama, C.: Hardware-based simulation and collision detection for large particle systems. In Graphics Hardware, pp. 123-132, August 2004.

Test Harness Description


The tool that will be used for testing is Google Test, given that it comes from a trustworthy third party, has good documentation, is based on xUnit and offers a good range of testing tools, such as fixtures and mocking.
It is important to note that, since the result of the application is mostly visual and there are no tools to test specific parts of functions running on GPUs, testing will be used only to a certain extent.
Google Test uses assertions, which are function-like statements that check whether a condition holds. These assertions will be used to check that the result of the computation on the GPU is the same, or roughly the same, as the result of the computation on the CPU, possibly comparing the results step by step over time in the particle system. An example of a simple assertion would be (particle_system is treated as if it were only an array of particles, for simplification purposes):
while (program_running) // standard rendering loop
{
    // keep a CPU-side copy of the current state before the GPU update
    for (int i = 0; i < MAX_PARTICLES; i++)
        cpu_particles[i] = particle_system[i];

    UpdateOnGpu(particle_system); // updates the particle system on the GPU
    UpdateOnCpu(cpu_particles);   // same update performed on the CPU copy

    // check that each particle matches the result of the CPU calculation
    for (int i = 0; i < MAX_PARTICLES; i++)
        EXPECT_EQ(cpu_particles[i], particle_system[i]);
}
Non-fatal assertions will be used so that more data is available to help with debugging in case something goes wrong, since the assertion will run for every step of the system.
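As a sketch of how such a check could be wrapped in an actual test case (InitParticles, UpdateOnCpu, UpdateOnGpu, MAX_PARTICLES and the tolerance are assumptions specific to this illustration), non-fatal EXPECT_NEAR assertions can compare the floating-point positions at every step:

#include <gtest/gtest.h>
#include <vector>

// Hypothetical helpers assumed to exist elsewhere in the project:
// InitParticles(), UpdateOnCpu(), UpdateOnGpu() and a Particle with x, y, z.
TEST(ParticleSystemTest, GpuMatchesCpuOverSeveralSteps)
{
    std::vector<Particle> gpu_particles = InitParticles(MAX_PARTICLES);
    std::vector<Particle> cpu_particles = gpu_particles; // identical starting state

    const float dt = 1.0f / 60.0f;
    for (int step = 0; step < 100; ++step) {
        UpdateOnGpu(gpu_particles, dt);
        UpdateOnCpu(cpu_particles, dt);

        // EXPECT_* assertions are non-fatal, so every mismatch is reported
        // instead of the test stopping at the first failing step.
        for (int i = 0; i < MAX_PARTICLES; ++i) {
            EXPECT_NEAR(cpu_particles[i].x, gpu_particles[i].x, 1e-4f);
            EXPECT_NEAR(cpu_particles[i].y, gpu_particles[i].y, 1e-4f);
            EXPECT_NEAR(cpu_particles[i].z, gpu_particles[i].z, 1e-4f);
        }
    }
}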
References:
Getting started with Google C++ Testing Framework. 21 Nov 2014. URL: http://code.google.com/p/googletest/wiki/V1_7_Primer
Personal learning goals

- I want to learn how to perform general computing on the GPU using the OpenCL API
- I want to learn how to draw a shape defined in OpenCL using OpenGL
- I want to create a simple particle system that runs completely on the GPU, without bottlenecking on communication with the processor

Specific Technical Scenario

- To learn the basics of how to perform general computing on the GPU, I will start by implementing a simple array sum using OpenCL, expecting it to run faster than on the CPU and to produce the same result (a sketch of such a kernel is given after this list).
- To prepare the process of creating the particle system, I will implement a simple function that updates the position of a particle in an array of particles (I will use data structures available on the web to save time). The positions should be updated once per time tick and behave according to the specified movement parameters.
- To understand the interoperability of the aforementioned APIs, I will define a buffer in OpenCL and use the data in this buffer to draw a set of points using OpenGL. The result should be a set of points and/or geometric primitives drawn on the screen.
- To test the scalability of this technology, a simple particle system with a large number of particles will be created, and the frame rate will be measured with different numbers of particles. I expect the system to keep a constant rate of 30 frames per second with at least 1 million particles.
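For the first scenario, a minimal sketch of the kind of OpenCL kernel involved is shown below; the kernel name is arbitrary, and the surrounding host code (context, command queue and buffer setup) is left out and would need to be written separately.

// OpenCL C kernel: element-wise sum of two arrays, one work-item per element.
// The host code is expected to create buffers for a, b and result, set them as
// kernel arguments and enqueue the kernel with a global size of at least n.
__kernel void array_sum(__global const float* a,
                        __global const float* b,
                        __global float* result,
                        const unsigned int n)
{
    size_t i = get_global_id(0); // index of this work-item
    if (i < n)                   // guard in case the global size was rounded up
        result[i] = a[i] + b[i];
}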

Outline Plan
Week 7 - Learn how to use the selected testing harness, install all libraries and tools that will be used in the coursework, set up the git repository and prepare the evidence plan template (3 hours)
Week 8 - Gather knowledge and implement the first and third specific technical scenarios (2 hours)
Week 9 - Gather knowledge and implement the second technical scenario (2 hours)
Week 10 - Implement the last technical scenario (3 hours)
Week 11 - Gather and refine evidence, and reflect on the overall experience of using spikes, producing a written report (3 hours)
Week 12 - Left as extra time for slippage
Evidence Plan
A git repository will be created for this project, with each technical scenario separated into its own folder. The commit messages will be the main evidence, together with the resulting code of each scenario and screenshots of the scenarios that involve rendering. A final report will be produced, containing a short discussion of my experience, my impressions of it, and the pieces of evidence subjectively chosen as the most significant.
