CONCRETE MIX DESIGN USING
ARTIFICIAL NEURAL NETWORK
A
THESIS
Submitted
in partial fulfillment of the requirements
for the award of degree of
MASTER OF ENGINEERING
IN
CIVIL ENGINEERING (STRUCTURES)
By
RISHI GARG
Roll No. 8002309
Under the guidance of
Dr. NAVEEN KWATRA
DEPARTMENT OF CIVIL ENGINEERING
THAPAR INSTITUTE OF ENGINEERING & TECHNOLOGY
(Deemed University)
PATIALA – 147004
June - 2003
CERTIFICATE
This is to certify that the work presented in the thesis entitled “Concrete Mix
Design Using Artificial Neural Network”, submitted by Mr. Rishi Garg (Roll No.
8002309) in partial fulfilment of the requirements for the award of the degree of
MASTER OF ENGINEERING IN CIVIL ENGINEERING (STRUCTURES) of
Thapar Institute of Engineering and Technology (Deemed University), Patiala,
is an authentic record of the student's own work carried out under our supervision
and guidance. The matter presented in the thesis has reached the standard
fulfilling the requirements of the regulations for the award of the said degree.
(Dr. Naveen Kwatra)
Lecturer,
Department of Civil Engineering
Thapar Institute of Engineering and Technology,
(Deemed University), Patiala – 147004
(Dr. A. Trivedi)
Head of Civil Engineering Department
Thapar Institute of Engineering and Technology,
(Deemed University), Patiala

(Dr. D.S. Bawa)
Dean of Academic Affairs
Thapar Institute of Engineering and Technology,
(Deemed University), Patiala
ACKNOWLEDGEMENT
I express my profound gratitude and sincere thanks to Dr. Naveen Kwatra,
Lecturer, Department of Civil Engineering, T.I.E.T., Patiala, for his
inspiration, persistent encouragement and invaluable guidance throughout the
preparation of this thesis.
I acknowledge with thanks the support provided by Mr. Maneek Kumar, Asstt.
Prof. C.E.D., Ms. Shweta, Mr. Om Parkash, and Mr. Varinder Kanwar.
I wish to record my sincere thanks to the Head of Department, Dr. M.L.
Gambhir, T.I.E.T. Patiala, who allowed me to use the institutional facilities to
carry out my research.
I also owe thanks to my parents and friends for helping me in preparing this
thesis.
Date: Rishi Garg
Place: Patiala Roll No. 8002309
CONTENTS
CHAPTER 1 INTRODUCTION
1.1 GENERAL
1.2 PROBLEM IDENTIFICATION AND OBJECTIVES
1.3 SCOPE OF THE WORK
1.4 ORGANISATION OF THE THESIS
CHAPTER 2 LITERATURE REVIEW
CHAPTER 3 CONCRETE MIX DESIGN
3.1 GENERAL
3.2 PRINCIPLES OF CONCRETE MIX DESIGN
3.3 INGREDIENTS OF CONCRETE MIX
3.3.1 CEMENT
3.3.2 AGGREGATES
3.3.2.1 COARSE AGGREGATES
3.3.2.2 FINE AGGREGATES
3.3.3 WATER
3.4 DESIGN METHODS
3.5 ACI METHOD OF MIX PROPORTIONING
CHAPTER 4 ARTIFICIAL NEURAL NETWORKS
4.1 GENERAL
4.2 INTRODUCTION TO NEURAL NETWORK
4.2.1 The Biological Neuron
4.2.2 The Artificial Neuron
4.2.3 The Artificial Neural Network
4.2.4 Types of ANN
4.2.5 Learning of ANN
4.3 BACK PROPAGATION LEARNING NETWORK
4.3.1 Introduction
4.3.2 Feed Forward Computation
4.3.3 Error Back Propagation
4.3.3.1Selection of learning rate parameter
4.3.3.2 Momentum Constant
4.3.3.3 Pattern Presentation & Weight Adjustment
4.3.3.4 Selection of initial weights
4.3.3.5 Problem of local minima
4.3.3.6 Normalisation of Training Data Set
CHAPTER 5 RESULTS & DISCUSSIONS
5.1 EXPERIMENTAL PROGRAM
5.2 EFFECT OF FINENESS MODULUS
5.3 EFFECT OF SIZE OF COARSE AGGREGATE
5.4 EFFECT OF WATER CEMENT RATIO
5.5 ANN MODELLING OF CONCRETE MIX DESIGN
5.5.1 Selection of Learning Rate Parameter and Momentum Constant
5.5.2 Architecture of Network
5.5.3 Selection of Training Data Set
5.5.4 Testing of the Network
5.5.5 Comparison of Experimental & Predicted Data
CHAPTER 1
INTRODUCTION
1.1 GENERAL
Concrete is the most widely used construction material because of its
flowability into the most complicated formwork, i.e. its ability to take any
shape while wet, and its strength development characteristics when it hardens.
Generally concrete is used to build protective structures, which are
subjected to several extreme stress conditions. Concrete is the most
widely used construction material manufactured at the site. This
composite material is obtained by mixing cement, water and aggregates.
Its production involves a number of operations according to prevailing
site conditions. The ingredients of widely varying characteristics can be
used to produce concrete of acceptable quality. The strength, durability
and other characteristics of concrete depend upon the properties of its
ingredients, the proportions of the mix, the method of compaction and
other controls. The popularity of concrete as a construction material is
due to the fact that it is made from commonly available ingredients and
can be tailored to functional requirements in a particular situation.
Among the various properties of concrete, its compressive strength is
considered to be the most important. However, workability of concrete
plays an important role in the mix design. Other factors such as W/C
ratio, Fineness modulus of aggregate and specific gravity of cement
have their own importance in mix design.
1.2 PROBLEM IDENTIFICATION AND OBJECTIVES
The development of normal concrete, i.e. mix design, is carried out using
certain empirical relationships among design parameters, developed from
past experience. A normal concrete mix of the required strength can be
achieved after carrying out several trials on the mix proportions.
An artificial neural network (ANN) is a network consisting of several nodes,
known as neurons. The connections between these neurons carry weights, which
define the relationship between the input and output data. ANN is a technique
that can be used for problems where no solution algorithm is known, and the mix
design of concrete can be placed in this category of problems. Moreover, the
development of a concrete mix, which requires a large number of trials, is a
complex problem in itself. The ability of an ANN to establish a relationship
between input and output data can be used to obtain a relationship between the
various design parameters of normal as well as high performance concrete. Use
of an artificial neural network for the development of a concrete mix may
therefore reduce the number of trials required. To develop an artificial neural
network model for concrete mix design, a sufficient set of mix proportions with
the corresponding characteristic strength, water content and fineness modulus
of aggregate is required for training of the neural network. Since sufficient
data for mix design is not available, mix design data corresponding to the
above-mentioned characteristics has been generated experimentally for the
training of the ANN. Using this data, ANN modelling of concrete mix design has
then been carried out.
The main objectives of the present study are:
1) To obtain mix design data experimentally.
2) To train and test an ANN model for concrete mix design using the data
generated in (1).
1.3 SCOPE OF THE WORK
The scope of the present work consists of training and testing of ANN
for field of concrete mix design. The present work is limited to normal
concrete.
1.4 ORGANISATION OF THE THESIS
The thesis has been divided into six chapters. Chapter 2 deals with the review
of literature. Chapter 3 presents the concrete mix design principles and
methods, particularly the ACI method, which has been used in the present
study for concrete mix design. Chapter 4 presents artificial neural
network modelling, with special emphasis on the back-propagation learning
network used in the present work. Chapter 5 presents the results and
discussions, and finally in Chapter 6 the conclusions of the study are
presented.
CHAPTER 2
LITERATURE REVIEW
This chapter presents the work done by various researchers on concrete mix
design using various analytical techniques. Extensive literature is available on
experimental studies of mix design; however, very little literature is available
on concrete mix design using analytical techniques.
Krishna Raju et al. (1989) presented the results of a comparative study of
different methods of designing concrete mixes. Using ordinary Portland
cement, sand and crushed granite aggregates, mixes for concrete of grades
M15, M30 and M45 were designed in accordance with Road Note No. 4, the
American Concrete Institute method and Murdock's method. Standard tests were
carried out to determine the compressive strength of concrete. The results
show that Murdock's method used the minimum cement content for concrete in
the low and medium strength ranges, and the Road Note No. 4 method gave
economical mixes, especially in the high strength range.
Ken W. Day presented the Conad software system for concrete mix design.
A concrete producer with no special computer expertise can use the Conad
software system both to rapidly upgrade the practitioner's technological base
and to produce a more economical and consistent concrete. This software
system can enable concrete practitioners with limited access to computers to
make full use of permission to vary mix proportions. The purpose of utilising
this technology is to reduce concrete variability and to produce a quality
product.
Popovics et al. (1996) described that computerised calculation of concrete
proportions is not a new idea. Nevertheless, with new developments both in
concrete technology and in computer science, it is possible to further improve
proportioning software. The novelty of their software is that it is not
restricted to the relationships recommended by ACI, which are limited to the
standard 28-day compressive strength. A software package was presented with
which both the ACI 211.1 procedures can be performed and new strength formulas
can be easily established for a given group of concretes. These new strength
formulas are valid regardless of the nature and type of the cement used, the
class and quantity of the admixtures, the type of strength, age, air content, or
curing procedure.
Ganju (1996) presented a method of designing trial mixes that enables the direct
computation of ingredients using formulae, without the use of any charts or
tables. Trial mix design examples given by Mindess and Young, Popovics, and
Mehta were recalculated using a Microsoft Excel spreadsheet, and the calculated
values were compared with other methods based on ACI and British practice.
Improvements to the proposed method are possible by including specification
constraints, such as tables, in the spreadsheet. Spreadsheets are the preferred
format for such calculations as they can be used interactively.
CHAPTER 3
CONCRETE MIX DESIGN
3.1 GENERAL
The present chapter discusses the conventional method of concrete mix
design. Concrete mix design is the process of selecting suitable
ingredients of concrete and determining their optimum proportions
which would produce, as economically as possible, concrete that
satisfies the job requirements, i.e. the concrete having a certain
minimum compressive strength, the desired workability and durability. In
addition to these requirements, the cement content in the mix should be
as low as possible to achieve maximum economy. The proportioning of
the ingredients of concrete is an important part of concrete technology
as it ensures the quality and economy.
3.2 PRINCIPLES OF CONCRETE MIX DESIGN
Proportioning of a concrete mix comprises determining the relative
quantities of materials to be used in the production of concrete for a given
purpose. The process of selecting the proportions of these materials is
called "concrete mix design" and should not be confused with
structural design. Proportioning may be based on data obtained from
practical experience and investigation of the test results of the various
ingredients, or on empirical data. The process of mix design involves
consideration of the properties and costs of the ingredients, the requirements
for placing and finishing the fresh concrete, and the properties of the
hardened concrete such as strength, durability and volumetric stability.
The main objectives of concrete mix design can thus be stated as the
production of concrete, which shall be:
Satisfying the requirements of fresh concrete (Workability).
Satisfying the properties of hardened concrete (Strength and
durability).
Most economical for the desired specifications and given materials at
a given site.
Performing most optimally in the given structure under given
conditions of environment.
The concrete mix design is based on the principles of
Workability of fresh concrete.
Desired strength and durability of hardened concrete which in turn is
governed by water-cement ratio law
Conditions at the site, which helps in deciding workability, strength
and durability requirements.
3.3 INGREDIENTS OF CONCRETE MIX
The main ingredients are:
Cement
Aggregates
Water
3.3.1 CEMENT
Cement is the most important ingredient of any type of concrete. It
determines the strength and other properties of concrete in both its fresh
and hardened states. The selection of the type of cement depends upon the specific
requirements of concrete. Properties considered in selection of the
cement are a) compressive strength at various ages, b) Fineness, c)
heat of hydration, d) alkali content, e) C3A content, f) its compatibility
with admixture.
3.3.2 AGGREGATES
Aggregate constitutes the largest volume fraction of concrete and can be divided
into two categories: 1) coarse aggregate, and 2) fine aggregate.
3.3.2.1 Coarse Aggregate (CA)
The coarse aggregate is the strongest component of concrete.
Selection of coarse aggregate depends upon properties such as
crushing strength, durability, modulus of elasticity, maximum size,
gradation, shape and surface characteristics, flakiness and
elongation indices, and the presence of deleterious particles.
The maximum size of aggregate is governed by two factors. First,
bigger size particles cause concentration of stresses around particles
due to differences between elastic moduli of paste and the
aggregate, leading to failure of bond between mortar and aggregate.
Second, the crushing process generally takes place along potential
zones of weakness within the parent rock and thus removes them. So,
smaller particles of the coarse aggregate fraction are likely to be
stronger than larger ones.
3.3.2.2 Fine aggregates or Sand
Fine aggregates with rounded particle shape and smooth texture
have been found to require less mixing water. Sand having fineness
modulus (FM) below 2.5 introduces stickiness into the concrete,
making it difficult to compact. Sand with FM of about 3.0 gives best
workability and compressive strength. Sand particle should pack to
give minimum void ratio, as higher void ratio leads to requirement of
more water. Finally, sand should be free from deleterious materials
like clay, silt content and chloride contamination etc.
3.3.3 WATER
Water conforming to requirement of IS: 456 has been found to be
suitable for producing concrete mix. In concrete mix, the water
requirement is reduced to the value required for hydration of cement,
as excess water leads to formation of void in hardened cement paste
phase of concrete. In general, water fit for drinking is fit for production
of concrete.
The various impurities in water, such as chlorides, sulphates, carbonates
and salts, affect the setting and hardening characteristics of concrete and
cause a reduction of both initial and final strength. The salt content of
the water should also be limited from the point of view of its effect on the
initial hydration rate of cement, as this may lead to a rapid loss of
workability on account of the higher amount of heat generated.
3.4 DESIGN METHODS
The methods of design followed all over the world are essentially
similar, except that each country has its own set of tables and graphs
for the calculation of density, water required for workability and
strength, etc., based on the type of aggregates and cement available
locally. Only minor variations exist in different mix design methods in
the process of selecting the mix proportions.
Some of the common mix design methods are :-
Mix Design According to Indian Standard Recommended guidelines
ACI Mix Design method
USBR Mix Design Method
British Mix Design Method
Trial And Error Method
In the present work ACI design method has been used for developing
mix design data. The ACI design method has been discussed briefly in
the following paragraphs.
3.5 ACI method of mix proportioning
The ACI method is based on the fact that for a given maximum size of
well-shaped aggregate, the water-content (kg/m³) determines the
workability of mix, i.e. it is largely independent of mix proportions. The
method further assumes that the optimum ratio of the bulk volume of
coarse aggregate to the total volume of concrete depends only on the
maximum size of aggregate and on the grading of fine aggregate. The
optimum volume of coarse aggregate when used with fine aggregate of
different fineness moduli is given in Table 3.3. Having determined the
maximum size and type of available aggregate, the water content for a
specified workability is selected from the Table 3.2 and bulk volume of
coarse aggregate from the Table 3.3. The water cement ratio is
determined as in other methods to satisfy both strength and durability
requirements. The air content in concrete is taken into account for
calculating the volume of fine aggregate. The step-by-step procedure
adopted for the selection of mix proportions is as follows.
(i) The water-cement ratio is selected from Table 3.1 for the average
strength.
(ii) The maximum size of the coarse aggregate to be used is determined
by sieve analysis. The degree of workability is decided depending
upon the placing conditions, etc.
(iii) The water content is selected from Table 3.2 for the desired
workability and maximum size of aggregate.
(iv) The cement content is calculated from the water content and water-
cement ratio required for strength and durability.
(v) The coarse aggregate content is estimated from Table 3.3 for
maximum size of aggregate and fineness modulus of sand.
(vi) The content of fine aggregate is determined by subtracting the sum
of absolute volumes of the coarse aggregate, cement, water and
entrained air from unit volume of concrete. Trial batches are tested
and final proportions are obtained by adjustments.
Table 3.1 Relationship between water-cement ratio and average compressive
strength (ACI Manual of Concrete Practice, Part I, 1979).

Compressive strength      Water-cement ratio by mass
at 28 days, MPa           Non-air-entrained concrete    Air-entrained concrete
45                        0.38                          -
40                        0.43                          -
35                        0.48                          0.40
30                        0.55                          0.46
25                        0.62                          0.53
20                        0.70                          0.61
15                        0.80                          0.71
Table 3.2 Approximate water requirements for different slumps and maximum
sizes of coarse aggregate (ACI Manual of Concrete Practice, Part I, 1979).

Slump, mm     Mixing water (kg/m³ of concrete) for maximum size of aggregate, mm
              10     12.5   20     25     40     50     70     150
Non-air-entrained concrete
30-50         205    200    185    180    160    155    145    125
80-100        225    215    200    195    175    170    160    140
150-180       240    230    210    205    185    180    170    -
Approximate percentage of entrained air
              3.0    2.5    2.0    1.5    1.0    0.5    0.3    0.2
Air-entrained concrete
30-50         180    175    165    160    145    140    135    120
80-100        200    190    180    175    160    155    150    135
150-180       215    205    190    185    170    165    160    -
Recommended average total air content, per cent
              8.0    7.0    6.0    5.0    4.5    4.0    3.5    3.0
Table 3.3 Bulk volume of coarse aggregate (ACI Manual of Concrete
Practice, Part I, 1979)

Maximum size of    Bulk volume of dry-rodded coarse aggregate per unit volume of
aggregate, mm      concrete, for fineness modulus of fine aggregate
                   2.40      2.60      2.80      3.00
10                 0.50      0.48      0.46      0.44
12.5               0.59      0.57      0.55      0.53
20                 0.66      0.64      0.62      0.60
25                 0.71      0.69      0.67      0.65
40                 0.76      0.74      0.72      0.70
50                 0.78      0.76      0.74      0.72
70                 0.81      0.79      0.77      0.75
150                0.87      0.85      0.83      0.81
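As an illustration of steps (i)–(vi) above, the following Python sketch encodes small excerpts of Tables 3.1–3.3 (non-air-entrained concrete and a 30–50 mm slump only) as lookup dictionaries and walks through the proportioning arithmetic. The specific gravities and the dry-rodded unit mass used below are assumed illustrative values, not part of the tables; a real design would use the full ACI 211.1 tables, a sieve analysis for step (ii) and trial-batch adjustment.

```python
# Minimal sketch of ACI steps (i)-(vi) for non-air-entrained concrete and a
# 30-50 mm slump only. The table excerpts, specific gravities and dry-rodded
# unit mass below are illustrative assumptions.

# Table 3.1 (excerpt): 28-day strength (MPa) -> water-cement ratio by mass
WC_RATIO = {45: 0.38, 40: 0.43, 35: 0.48, 30: 0.55, 25: 0.62, 20: 0.70, 15: 0.80}

# Table 3.2 (excerpt): mixing water (kg/m3) for 30-50 mm slump vs. max aggregate size (mm)
WATER_30_50_SLUMP = {10: 205, 12.5: 200, 20: 185, 25: 180, 40: 160}

# Table 3.3 (excerpt): bulk volume of dry-rodded coarse aggregate per unit volume
# of concrete, keyed by (max aggregate size mm, fineness modulus of sand)
BULK_VOLUME_CA = {(20, 2.4): 0.66, (20, 2.6): 0.64, (20, 2.8): 0.62, (20, 3.0): 0.60}

def aci_mix(strength_mpa, max_size_mm, fineness_modulus,
            rho_cement=3150.0, rho_agg=2640.0, unit_mass_ca=1600.0):
    """Return cement, water, coarse and fine aggregate masses per m3 of concrete."""
    wc = WC_RATIO[strength_mpa]                          # step (i)
    water = WATER_30_50_SLUMP[max_size_mm]               # step (iii)
    cement = water / wc                                  # step (iv)
    coarse = BULK_VOLUME_CA[(max_size_mm, fineness_modulus)] * unit_mass_ca   # step (v)
    # step (vi): fine aggregate fills the remaining absolute volume of 1 m3
    vol_used = cement / rho_cement + water / 1000.0 + coarse / rho_agg
    fine = (1.0 - vol_used) * rho_agg
    return {"cement": cement, "water": water, "coarse": coarse, "fine": fine}

print(aci_mix(strength_mpa=35, max_size_mm=20, fineness_modulus=2.4))
```

Changing the target strength, aggregate size or fineness modulus simply selects different table entries; the arithmetic of steps (iv)–(vi) stays the same.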
CHAPTER 4
ARTIFICIAL NEURAL NETWORKS
4.1 GENERAL
An artificial neural network (ANN) is a network of artificial neurons
(information processing units) and is inspired by the way in which the human
brain performs a particular task or function of interest. A neural network
is a computational method inspired by studies of the brain and nervous
system in biological organisms. Artificial neural networks represent
highly idealised mathematical models of our present understanding of
such complex systems. ANN models have the ability
to learn and to generalise problems even when the input data contain errors
or are incomplete.
4.2 INTRODUCTION TO ARTIFICIAL NEURAL NETWORK
4.2.1 The Biological Neuron
The human brain is estimated to contain about 10¹¹ interconnected
neurons. The structure of a biological neuron consists of a central cell
body, an axon and a number of dendrites. Fig. 4.1a shows the
structure of a biological neuron. The output from the cell is transmitted
through the axon to synapses, where the outputs are increased or
decreased, depending upon the synaptic weights, before being passed to the
next neuron. The dendrites are the points where the input signals are
received. The inputs come from the sensory organs or from other
neurons. The signals, as they pass through the network, create different
levels of activation in the neurons. Identification or recognition
depends on the activation levels of the neurons.
4.2.2 The Artificial Neuron
An Artificial Neuron (refer Fig.4.1b) models the behaviour of the
biological neuron. Each artificial neuron receives a set of inputs, and each
input is multiplied by a weight analogous to a synaptic strength. The sum of
all weighted inputs determines the degree of firing, called the activation
level. Notationally, if I_1, …, I_i, …, I_n are the input values and
w_1j, …, w_ij, …, w_nj are the synaptic weight values, the net input to a
typical neuron j is the summation, over all incoming neurons, of the product
of the incoming neuron's activation and the synaptic weight of the connection.
A threshold value θ_j is incorporated, giving the resultant

$$net_j = \sum_{i=1}^{n} I_i\, w_{ij} + \theta_j$$
where n is the number of incoming neurons.
The output from the jth neuron can be expressed as

$$O_j = g(net_j)$$

where g is the activation function. There are several conventional choices for
the activation function, which is also called the threshold function, transfer
function or squashing function. The most commonly used activation functions
are the linear, nonlinear, step, sigmoid and hyperbolic tangent functions. Of
these, the sigmoid function is very popular. The output of the jth neuron
using the sigmoid function can be written as

$$O_j = \frac{1}{1 + \exp(-net_j)}$$
The sigmoid function is the most commonly used activation function because it
is a continuous function, it has a very simple derivative that is useful for
the development of learning algorithms, and it also represents the processing
of a biological neuron. Fig. 4.2 shows the sigmoid function. It can be seen
from Fig. 4.2 that the output of the neuron is low when the input is low, and
high when the input is sufficiently high.
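As a minimal illustration of the weighted-sum-plus-sigmoid behaviour described above (not code from the thesis), a single artificial neuron can be evaluated in a few lines of Python; the inputs, weights and threshold below are arbitrary:

```python
import math

def neuron_output(inputs, weights, threshold):
    """Output O_j of a single artificial neuron: sigmoid of the weighted input sum plus threshold."""
    net_j = sum(i * w for i, w in zip(inputs, weights)) + threshold
    return 1.0 / (1.0 + math.exp(-net_j))   # sigmoid activation g(net_j)

# Arbitrary example: three inputs, three weights, one threshold value
print(neuron_output([0.5, 0.2, 0.9], [0.4, -0.7, 0.1], threshold=0.05))
```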
4.2.3 The Artificial Neural Network (ANN)
The ANN is a crude imitation of the biological neural network. It consists
of several layers of artificial neurons. The first layer is called input layer
and the last layer is called output layer. The layers in between the input
and output layers are called hidden layer. The layers are connected by
weighted pathways.
4.2.4 Types of ANN
Artificial neural network models are specified by the net topology, node
characteristics and training or learning rules. These rules specify an
initial set of weights and indicate how the weights should be adapted during
use to improve performance. A crude classification can be made on the
grounds of the type of input patterns (i.e. either binary or continuous-valued
inputs). This can further be refined on the basis of the type of
learning (i.e. supervised or unsupervised). A complete classification of
different neural networks is presented in Fig. 4.3.
4.2.5 Learning of the ANN
The main phase in a neural network is learning. The procedure for
adjusting the weights is called training; the weights can either be set
from values computed from a set of training data or be adjusted
automatically according to some criterion. Learning performance is improved
by iteratively updating the weights in the network. The learning of an ANN
is generally classified as:
(a) Supervised Learning: The ANN is trained with a set of input-output
patterns. The weights are adjusted to minimise the error between the
network output and the target output.
(b) Unsupervised Learning: In unsupervised training the network
adjusts its weights in response to input patterns without the
benefit of target answers. In this type of learning the network
classifies the input patterns into similarity categories.
(c) Reinforcement Learning: This is a variant of supervised learning in
which the network attempts to learn the input-output patterns through
trial and error, with a view to minimising a performance index called
the reinforcement signal. Target values are not provided for
learning; instead, only an error signal on the output is given to the
ANN. This process is analogous to reward or punishment.
4.3 BACK-PROPAGATION LEARNING NETWORK
4.3.1 Introduction
In ANN theory there are many paradigms developed to update
synaptic weights. Out of them back-propagation is the most
widely used of the neural network paradigms and has been
applied successfully in application studies in a broad range of
areas. Back-propagation can attack any problem that requires
pattern mapping. Given an input pattern, the network produces
an associated output pattern. In the present study Back-
propagation learning algorithm has been used.
Supervised learning is used in the training of a back-propagation
learning network. Back-propagation employs three or more layers
of processing units: an input layer, at least one hidden layer and an
output layer. Input units do not process
information. They simply distribute information to other units.
Generally Back-propagation learning algorithm applies two basic
steps, (i) feed forward calculation, (ii) error back propagation
calculation.
Fig. 4.4 shows the typical multi-layer feed forward Back-
Propagation network. A weight is associated with each
connection from input to hidden units and from hidden units to
output units. Each unit in the input layer is connected to every
unit of hidden layer, likewise each unit in the hidden layer is
connected to each unit of next hidden layer (if more than one
hidden layer is present) or to each unit of output layer.
A bias unit (optional) is employed in every layer except the
output layer. This can be achieved simply by adding a constant
input with an appropriate weight. This unit has a constant
activation of 1. Each bias unit is connected to all units in the next
higher layer, and its weights to them are adjusted like the other
weights. The bias units provide a constant term in the weighted
sum of the units in the next layer. This sometimes improves the
convergence properties of the network.
4.3.2 Feed Forward Computation
The input vector representing the pattern to be recognised is
incident on the input layer and distributed to subsequent hidden
layers and finally to output layer via weighted connections. Each
neuron in the network operates by taking sum of its weighted
input and passing the result through a nonlinear activation
function.
The net input to hidden unit j, net_pj, is described as:

$$net_{pj} = \sum_{i=1}^{n} w_{ji}\, I_i + \theta_j \qquad \ldots(4.1)$$

where
w_ji = weight from neuron i (source) to neuron j (destination)
I_i = input value of neuron i
θ_j = weight from the bias unit to neuron j
The output of a hidden unit (H_j) as a function of its net input is given by

$$H_j = g(net_{pj}) = \frac{1}{1 + \exp(-net_{pj})} \qquad \ldots(4.2)$$

where g is the sigmoid function.
The net input (net_pk) to each output layer unit and the output (O_pk)
from each output unit are calculated in an analogous manner, as described by
the following equations:

$$net_{pk} = \sum_{j=1}^{m} w_{kj}\, H_{pj} + \theta_k \qquad \ldots(4.3)$$

$$O_{pk} = \frac{1}{1 + \exp(-net_{pk})} \qquad \ldots(4.4)$$

where w_kj = weight from node j to node k and θ_k = weight from the bias unit
to node k.
The set of calculations that results in obtaining the output state of
the network is carried out in the same way for both the training and the
testing phase. The test mode just involves presenting the input set to
input units and calculating the resulting output state in a single
forward pass.
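A minimal sketch of the feed-forward computation of equations 4.1–4.4, assuming a single hidden layer and using NumPy for the matrix arithmetic, is given below; the layer sizes and the randomly drawn weights are illustrative assumptions only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(inputs, w_hidden, b_hidden, w_output, b_output):
    """One forward pass: eqs. 4.1-4.2 for the hidden layer, eqs. 4.3-4.4 for the output layer."""
    net_pj = w_hidden @ inputs + b_hidden      # eq. 4.1 (b_hidden plays the role of the bias weights)
    h = sigmoid(net_pj)                        # eq. 4.2
    net_pk = w_output @ h + b_output           # eq. 4.3
    return sigmoid(net_pk)                     # eq. 4.4

# Arbitrary example: 5 inputs, 3 hidden units, 2 outputs, random weights
rng = np.random.default_rng(0)
x = rng.random(5)
print(feed_forward(x,
                   rng.uniform(-0.3, 0.3, (3, 5)), rng.uniform(-0.3, 0.3, 3),
                   rng.uniform(-0.3, 0.3, (2, 3)), rng.uniform(-0.3, 0.3, 2)))
```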
4.3.3 Error Back-propagation
The first requirement in training is a measure of the closeness of the
network output to an established desired value. This measure is the
network error. Since the network uses supervised training, the desired
value is known for the given training set.
For the back-propagation learning algorithm (Rumelhart et al. 1986)
an error measure known as the mean square error is used. The
mean square error is defined as:

$$E_p = \frac{1}{2} \sum_{j=1}^{n} (T_{pj} - O_{pj})^2 \qquad \ldots(4.5)$$

where T_pj = target (desired) value of the jth output unit for pattern p and
O_pj = actual output obtained from the jth output unit for pattern p.
In training phase of Back-propagation learning algorithm, the total
error of the network is minimised by adjusting the weights.
Gradient descent method is used for this. Each weight may be
thought of as a dimension in N-dimensional error space. In error
space the weights act as independent variables and the shape of
the corresponding error surface is determined by error function in
combination with the training set. The negative gradient of the
error function with respect to the weights, thus points in the
direction, which will most quickly reduce the error function. This
can be expressed as
$$\Delta_p w_{ji} \propto -\frac{\partial E_p}{\partial w_{ji}} \qquad \ldots(4.6)$$

where Δ_p w_ji designates the change in the weight connecting a source neuron
i in layer L-1 and a destination neuron j in layer L (refer Fig. 4.5).
Applying the chain rule to evaluate ∂E_p/∂w_ji:

$$\frac{\partial E_p}{\partial w_{ji}} = \frac{\partial E_p}{\partial net_{pj}} \cdot \frac{\partial net_{pj}}{\partial w_{ji}}$$

Since net_pj = Σ_i w_ji O_pi, where O_pi is the output of the neurons in layer
L-1,

$$\frac{\partial net_{pj}}{\partial w_{ji}} = \frac{\partial}{\partial w_{ji}}\Big(\sum_i w_{ji}\, O_{pi}\Big) = O_{pi}$$

Hence

$$\frac{\partial E_p}{\partial w_{ji}} = \frac{\partial E_p}{\partial net_{pj}}\, O_{pi} \qquad \ldots(4.7)$$

Defining the error signal δ_pj as

$$\delta_{pj} = -\frac{\partial E_p}{\partial net_{pj}} \qquad \ldots(4.8)$$

and combining 4.7 and 4.8,

$$-\frac{\partial E_p}{\partial w_{ji}} = O_{pi}\, \delta_{pj} \qquad \ldots(4.9)$$

Putting this value in equation 4.6,

$$\Delta_p w_{ji} = \eta\, O_{pi}\, \delta_{pj} \qquad \ldots(4.10)$$

where η is the learning rate parameter. The learning rate parameter
determines the amount of weight change that will be used for the
weight correction.
In order to obtain a usable difference equation, δ_pj is found by applying
the chain rule again, as follows:

$$\delta_{pj} = -\frac{\partial E_p}{\partial net_{pj}} = -\frac{\partial E_p}{\partial O_{pj}} \cdot \frac{\partial O_{pj}}{\partial net_{pj}} \qquad \ldots(4.11)$$

Since O_pj = g(net_pj),

$$\frac{\partial O_{pj}}{\partial net_{pj}} = g'(net_{pj}) \qquad \ldots(4.12)$$

To evaluate ∂E_p/∂O_pj, two cases are considered individually:
(i) the destination unit j is an output layer unit
(ii) the destination unit j is a hidden layer unit
For a destination unit j in the output layer, E_p can be accessed directly as
a function of O_pj:

$$\frac{\partial E_p}{\partial O_{pj}} = \frac{\partial}{\partial O_{pj}} \left( \frac{1}{2} \sum_j (T_{pj} - O_{pj})^2 \right) = -(T_{pj} - O_{pj})$$

Putting this in 4.11,

$$\delta_{pj} = (T_{pj} - O_{pj})\, g'(net_{pj}) \qquad \ldots(4.13)$$
For a destination unit j in the hidden layer, the error function cannot be
differentiated directly, so the chain rule is applied again:

$$\frac{\partial E_p}{\partial O_{pj}} = \sum_k \frac{\partial E_p}{\partial net_{pk}} \cdot \frac{\partial net_{pk}}{\partial O_{pj}} \qquad \ldots(4.14)$$

In equation 4.14 the sum over k runs over all units in layer L+1
(refer Fig. 4.5). Putting in the value of net_pk,

$$\frac{\partial net_{pk}}{\partial O_{pj}} = \frac{\partial}{\partial O_{pj}}\Big(\sum_j w_{kj}\, O_{pj}\Big) = w_{kj}$$

Putting this in equation 4.14,

$$-\frac{\partial E_p}{\partial O_{pj}} = \sum_k \delta_{pk}\, w_{kj} \qquad \ldots(4.15)$$

Now combining equations 4.11, 4.12 and 4.15,

$$\delta_{pj} = g'(net_{pj}) \sum_k \delta_{pk}\, w_{kj} \qquad \ldots(4.16)$$
Equation 4.10 provides the difference equation in terms of δ_pj; it is valid
for both hidden and output layer weights. Equations 4.13 and 4.16 specify
δ_pj for the output layer and hidden layer weights, respectively. To obtain a
difference equation suitable for use on a digital computer, g'(net_pj) is
evaluated as follows:

$$O_{pj} = g(net_{pj}) = \frac{1}{1 + \exp(-net_{pj})}$$

$$\frac{\partial O_{pj}}{\partial net_{pj}} = g'(net_{pj}) = \left[\frac{1}{1 + \exp(-net_{pj})}\right]\left[1 - \frac{1}{1 + \exp(-net_{pj})}\right]$$

$$g'(net_{pj}) = O_{pj}\,(1 - O_{pj})$$

To summarise, the difference equation required for back-propagation training is:

$$\Delta w_{ji} = \eta\, \delta_{pj}\, O_{pi} \qquad \ldots(4.17)$$

where η refers to the learning rate parameter and δ_pj refers to the error
signal at neuron j in layer L (refer Fig. 4.5), given by

$$\delta_{pj} = (T_{pj} - O_{pj})\, O_{pj}\,(1 - O_{pj}) \quad \text{for output neurons}$$

$$\delta_{pj} = O_{pj}\,(1 - O_{pj}) \sum_{k} \delta_{pk}\, w_{kj} \quad \text{for hidden neurons}$$

where O_pj refers to layer L, O_pi refers to layer L-1 and δ_pk refers to
layer L+1.
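The summary equations translate directly into code. The following sketch is my own illustration under the same notation (one hidden layer, single-pattern updates, biases and momentum omitted); it computes the output and hidden-layer error signals and applies the weight correction of equation 4.17:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(x, target, w_hidden, w_output, eta=0.15):
    """One single-pattern gradient-descent step (eqs. 4.13, 4.16, 4.17); biases omitted."""
    h = sigmoid(w_hidden @ x)                              # hidden activations O_pj (layer L)
    o = sigmoid(w_output @ h)                              # output activations O_pk
    delta_out = (target - o) * o * (1.0 - o)               # error signal for output neurons
    delta_hid = h * (1.0 - h) * (w_output.T @ delta_out)   # error signal for hidden neurons
    w_output += eta * np.outer(delta_out, h)               # eq. 4.17: Delta w = eta * delta * O
    w_hidden += eta * np.outer(delta_hid, x)
    return 0.5 * np.sum((target - o) ** 2)                 # mean square error E_p (eq. 4.5)

rng = np.random.default_rng(1)
w1 = rng.uniform(-0.3, 0.3, (4, 3))    # 3 inputs -> 4 hidden units
w2 = rng.uniform(-0.3, 0.3, (2, 4))    # 4 hidden units -> 2 outputs
x = np.array([0.2, 0.7, 0.1])
t = np.array([0.3, 0.9])
for _ in range(1000):
    err = backprop_step(x, t, w1, w2)
print(err)
```

Repeatedly applying the step to one pattern drives the mean square error of equation 4.5 towards zero for that pattern.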
4.3.3.1 Selection of Learning Rate Parameter
The learning rate determines what amount of calculated error
sensitivity to weight change will be used for weight correction. Its
value varies between 0.0 and 1.0. The best value of the learning rate
depends on the characteristics of the error surface i.e. a plot of E
versus wji. If surface changes rapidly, the gradient calculated on
local information will give a poor indication of the true ‘right path’.
In this case, a smaller rate is desirable. On the other hand, if the
surface is relatively smooth, a large learning rate will speed
convergence. A general rule might be to use such a value of
learning rate parameter that it may not cause the system to
oscillate and thereby slow or prevent the network convergence.
4.3.3.2 Momentum Constant
In practice a momentum term is frequently added to equation
4.17 as an aid to more rapid convergence in certain problem
domains. The momentum term takes into account the effect of past
weight changes. The momentum constant, β, determines the
emphasis placed on this term. Momentum has the effect of
smoothing the error surface in weight space by filtering out
high-frequency variations. The weight change is adjusted in the presence
of momentum by:

$$\Delta w_{ji}(n+1) = \eta\,(\delta_{pj}\, O_{pi}) + \beta\, \Delta w_{ji}(n) \qquad \ldots(4.18)$$
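A short sketch of how equation 4.18 modifies the plain update is given below; the previous weight change is stored between iterations and blended in with the momentum constant β (the array shapes and values are arbitrary illustrations):

```python
import numpy as np

def update_with_momentum(w, grad_term, prev_delta, eta=0.15, beta=0.75):
    """Eq. 4.18: new change = eta * (delta_pj * O_pi) + beta * previous change."""
    delta_w = eta * grad_term + beta * prev_delta
    return w + delta_w, delta_w          # return updated weights and the change to reuse next step

w = np.zeros((2, 3))
prev = np.zeros_like(w)
grad = 0.1 * np.ones((2, 3))             # stand-in for delta_pj * O_pi from back-propagation
w, prev = update_with_momentum(w, grad, prev)
print(w)
```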
4.3.3.3 Pattern Presentation and Weight Adjustment
There are two ways of pattern presentation and weight
adjustment of the network:
(i) One way involves propagating the error back and adjusting
weight after each training pattern is presented. This is called
single pattern training.
(ii) Another way is Epoch training. One full presentation of all
patterns in the training set is termed an epoch. The error is back-propagated
based on the total network error. This approach is used in the present study
for training the neural network; the two modes are sketched below.
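The distinction between the two presentation modes can be sketched on a deliberately simple one-weight model (a toy illustration, not the network used in this study): single-pattern training updates the weight after every pattern, while epoch training accumulates the changes over one full presentation and applies them once.

```python
# Toy one-weight model y = w * x trained towards targets t; illustrates the two
# presentation modes only, not the network used in this study.
patterns = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, target) pairs
eta = 0.05

# (i) single-pattern training: the weight is updated after every pattern
w_single = 0.0
for x, t in patterns:
    w_single += eta * (t - w_single * x) * x

# (ii) epoch training: the changes are accumulated over one full presentation
# of all patterns (an epoch) and applied once
w_epoch, dw = 0.0, 0.0
for x, t in patterns:
    dw += eta * (t - w_epoch * x) * x
w_epoch += dw

print(w_single, w_epoch)
```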
4.3.3.4 Selection of Initial Weights
Initial weights are taken as 'small' random numbers over some range;
a range between -0.3 and +0.3 has been found to speed training
(Lee 1989). When the weights associated with a neuron grow
sufficiently large, the neuron operates in the region in which
the activation function approaches its limits (in the case of the sigmoid
function, 1 or 0), and the derivative of the activation function becomes
extremely small. Referring to equations 4.10, 4.13 and 4.16, it is
evident that when the derivative of the activation function approaches
zero, the weight adjustment made through back-propagation also
approaches zero, which results in ineffective training.
4.3.3.5 Problem of Local Minima
Sometimes during training the network can become trapped in a local minimum
rather than proceeding towards the global minimum of the error function.
This can be avoided either by changing the
learning parameter or by changing the number of hidden units.
4.3.3.6 Normalisation of the Training Data Set
Normalisation of the training data set is required before
presenting it to the network for its learning, so that it satisfies the
activation function range. Normalisation is also necessary if there
is a wide difference between the ranges of the feature values.
The normalisation process enhances the learning speed of the
network and avoids the possibility of early network saturation. In
the present study, each variable in the training data set is normalised
separately by dividing it by the maximum value of that variable.
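A sketch of the normalisation described above, dividing each variable by its maximum value and retaining the factors for later de-normalisation, could look as follows (the two example rows are taken from Table 5.1):

```python
import numpy as np

def normalise_by_max(data):
    """Scale each column (variable) of the training array to [0, 1] by its maximum value."""
    factors = data.max(axis=0)            # one normalising factor per variable
    return data / factors, factors        # keep the factors to convert network outputs back later

# Two example rows from Table 5.1: w/c ratio, water (kg/m3), FM, 28-day strength (N/mm2)
raw = np.array([[0.40, 185.0, 2.4, 32.17],
                [0.52, 190.0, 2.6, 28.90]])
scaled, factors = normalise_by_max(raw)
print(scaled)
```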
CHAPTER 5
RESULTS & DISCUSSIONS
5.1 EXPERIMENTAL PROGRAM
In the present study concrete mix design has been carried out by the ACI design
method. Locally available sands of fineness modulus 2.4 and 2.6 were used. Coarse
aggregates of 10 mm and 20 mm were used in the ratios 50:50 and 67:33 of the total
coarse aggregate volume. Ambuja cement of grade 53 was used. The detailed
calculations are given in Appendix A. For each mix proportion, nine cubes of
150 mm size were cast in the laboratory and tested in an Automatic Compression
Testing Machine (ACTM) at 7, 14 and 28 days. The concrete mix proportions and the
corresponding compressive strengths are given in Table 5.1 and are discussed in
the following paragraphs.
TABLE 5.1 CONCRETE MIX PROPORTIONING AND ITS COMPRESSIVE
STRENGTH
MIX           W/C     WATER      FINENESS   COARSE AGGREGATE     7 DAYS    14 DAYS   28 DAYS
RATIO         RATIO   (kg/m³)    MODULUS    RATIO (10 mm, 20 mm) (N/mm²)   (N/mm²)   (N/mm²)
1:1.49:2.28   0.40    185        2.4        1:1                  22.22     28.08     32.17
1:1.56:2.21 0.40 185 2.6 1:1 27.68 32.71 38.4
1:1.4:2.22 0.40 190 2.4 1:1 24.66 26.68 29.88
1:1.47:2.15 0.40 190 2.6 1:1 27.35 28.68 35.88
1:1.49:2.23 0.40 185 2.4 2:1 24.75 27.66 29.77
1:1.56:2.21 0.40 185 2.6 2:1 27.42 34.95 39.37
1:1.4:2.22 0.40 190 2.4 2:1 23.15 29.51 31.48
1:1.47:2.15 0.40 190 2.6 2:1 23.55 32.40 34.86
1:1.62:2.4 0.42 185 2.4 1:1 18.04 26.17 24.64
1:1.68:2.32 0.42 185 2.6 1:1 19.86 28.91 27.68
1:1.51:2.33 0.42 190 2.4 1:1 24.33 30.6 30.66
1:1.69:2.26 0.42 190 2.6 1:1 26.15 32.62 34.2
1:1.62:2.4 0.42 185 2.4 2:1 19.90 27.22 27.82
1:1.68:2.32 0.42 185 2.6 2:1 25.73 30.51 32.55
1:1.51:2.33 0.42 190 2.4 2:1 26.02 35.35 37.4
1:1.69:2.26 0.42 190 2.6 2:1 27.82 37.6 39.22
1:1.74:2.51 0.44 185 2.4 1:1 17.55 21.95 24.77
1:1.82:2.43 0.44 185 2.6 1:1 20.64 22.46 26.95
1:1.63:2.45 0.44 190 2.4 1:1 23.77 27.11 34.68
1:1.72:2.37 0.44 190 2.6 1:1 27.35 31.71 34.68
1:1.74:2.51 0.44 185 2.4 2:1 20.0 21.53 25.97
1:1.82:2.43 0.44 185 2.6 2:1 22.8 28.42 34.8
1:1.63:2.45 0.44 190 2.4 2:1 24.6 29.88 31.35
1:1.72:2.37 0.44 190 2.6 2:1 26.0 36.48 38.86
1:1.87:2.65 0.46 185 2.4 1:1 18.97 22.02 23.84
1:1.94:2.45 0.46 185 2.6 1:1 19.22 25.55 28.55
1:1.76:2.56 0.46 190 2.4 1:1 22.33 25.82 25.97
1:1.84:2.48 0.46 190 2.6 1:1 23.48 26.42 28.97
1:1.87:2.65 0.46 185 2.4 2:1 18.62 24.24 25.33
1:1.94:2.45 0.46 185 2.6 2:1 19.64 26.42 28.97
1:1.76:2.56 0.46 190 2.4 2:1 19.98 26.75 29.32
1:1.84:2.48 0.46 190 2.6 2:1 26.2 29.13 34.6
1:1.99:2.74 0.48 185 2.4 1:1 14.44 19.06 23.06
1:2.07:2.65 0.48 185 2.6 1:1 20.8 24.75 31.95
1:1.88:2.67 0.48 190 2.4 1:1 16.11 21.77 26.84
1:1.96:2.58 0.48 190 2.6 1:1 20.4 22.73 32.55
1:1.99:2.74 0.48 185 2.4 2:1 14.08 17.91 19.53
1:2.07:2.65 0.48 185 2.6 2:1 26.0 29.77 27.64
1:1.88:2.67 0.48 190 2.4 2:1 15.71 18.57 25.57
1:1.96:2.58 0.48 190 2.6 2:1 17.66 23.88 28.57
1:2.11:2.85 0.50 185 2.4 1:1 13.91 17.93 21.82
1:2.2:2.76 0.50 185 2.6 1:1 21.60 21.82 24.88
1:2.0:2.78 0.50 190 2.4 1:1 16.11 21.77 26.84
1:2.08:2.69 0.50 190 2.6 1:1 20.4 22.73 32.55
1:2.11:2.85 0.50 185 2.4 2:1 14.08 17.91 19.53
1:2.2:2.76 0.50 185 2.6 2:1 26.0 29.77 27.64
1:2.0:2.78 0.50 190 2.4 2:1 15.71 18.57 25.57
1:2.08:2.69 0.50 190 2.6 2:1 17.66 23.88 28.57
1:2.24:2.97 0.52 185 2.4 1:1 15.08 20.26 24.84
1:2.32:2.87 0.52 185 2.6 1:1 17.13 23.0 28.0
1:2.12:2.89 0.52 190 2.4 1:1 17.82 23.2 25.0
1:2.21:2.80 0.52 190 2.6 1:1 24.31 27.57 28.9
1:2.24:2.97 0.52 185 2.4 2:1 13.84 17.8 25.6
1:2.32:2.87 0.52 185 2.6 2:1 15.66 20.4 29.3
1:2.12:2.89 0.52 190 2.4 2:1 16.91 20.13 25.97
1:2.21:2.80 0.52 190 2.6 2:1 19.57 29.46 29.77
5.2 Effect of Fineness Modulus
Fineness modulus has a comparatively small effect on the compressive strength.
For a fineness modulus of 2.4 the increase of compressive strength is not as
high as for a fineness modulus of 2.6. For example, when the w/c ratio is 0.4,
the water content 185 kg/m³ and the 10 mm and 20 mm coarse aggregate ratio
50:50, the compressive strength at 7, 14 and 28 days for FM 2.4 is 22.22 N/mm²,
28.08 N/mm² and 32.17 N/mm² respectively, and the corresponding values for
FM 2.6 are 27.68 N/mm², 32.71 N/mm² and 38.4 N/mm² (refer Table 5.1). The rate
of increase of strength is 35% for FM 2.4 and 38% for FM 2.6.
Of the two fineness moduli used, the strength of concrete with FM 2.4 is more
affected by the change in w/c ratio. For FM 2.6 the 28-day compressive strength
(185 kg/m³ water content and 50:50 ratio of 20 mm and 10 mm coarse aggregate)
varies from 28.4 N/mm² to 28 N/mm² when the w/c ratio is changed from 0.4 to
0.52. The corresponding values for FM 2.4 are 32.17 N/mm² and 24.84 N/mm².
This can be seen by comparing figures 5.1 and 5.3. From figures 5.5 to 5.8 it
is observed that, for the same water-cement ratio, the compressive strength at
FM 2.6 is higher than at FM 2.4, and it also increases as the coarse aggregate
ratio changes from 67:33 to 50:50. It is also observed that for 185 kg of water
content the 28-day compressive strength is almost the same for FM 2.4 and
FM 2.6 (Fig. 5.15 to 5.17). Similar results were obtained for 190 kg of water.
5.3 Effect of the size of coarse aggregate
The strength of concrete using 20 mm and 10 mm aggregate in the ratio 67:33
is higher than the strength obtained using the ratio 50:50. The effect is
more pronounced at lower w/c ratios. From fig. 5.1 it is clear that for the
coarse aggregate size ratio 67:33 the 7, 14 and 28-day strengths are higher
than for the ratio 50:50. The pattern remains the same for FM 2.6 and FM 2.4
(ref. figs. 5.3 and 5.4). Figures 5.9 and 5.10 show that for FM 2.6 the
compressive strength increases significantly with an increase in the
percentage of coarse aggregate, whereas for fineness modulus 2.4 the
compressive strength increases up to 14 days and then starts decreasing with
the increase in the percentage of coarse aggregate.
5.4 Effect of water cement ratio
The variation of compressive strength with water-cement ratio has been
plotted in figures 5.15 to 5.18. It has been observed that as the water-cement
ratio increases, the compressive strength decreases irrespective of the
amount of cement, the fineness modulus or the ratio of aggregates used. From
figs. 5.15, 5.16, 5.17 and 5.18 it is observed that the compressive strength
at a lower w/c ratio is higher than at a higher w/c ratio.
5.5 ANN MODELLING OF CONCRETE MIX DESIGN
An ANN can map the relationship between input parameters and the
corresponding output parameters. The ability of the neural network approach
to train on a given data set and, on that basis, to predict missing data and
also to generalise, makes it an attractive proposition for knowledge
acquisition in problems where there is no acceptable theory. In the present
study a back-propagation neural network was used for finding the proportions
of fine aggregate and coarse aggregate. All the input and output data have
been normalised by the maximum value (termed the normalising factor) of each
parameter, so that the values remain between 0 and +1. The output of the
network is obtained in normalised form and is converted to actual values by
multiplying each value by the corresponding normalising factor used in
preparing the training set. The initial weights have been set as random
numbers in the range -0.3 to +0.3.
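The normalising factors and the random initial weights described above can be sketched as follows; the network output values shown are hypothetical, while the two factors are the maximum fine- and coarse-aggregate proportions appearing in Table 5.1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Initial weights drawn as small random numbers in the range -0.3 to +0.3
w_init = rng.uniform(-0.3, 0.3, size=(30, 5))

# Network outputs are produced in normalised form; multiplying by the same
# normalising factors used for the training set recovers the actual values.
normalising_factors = np.array([2.32, 2.97])   # maximum fine and coarse aggregate proportions in Table 5.1
normalised_output = np.array([0.60, 0.75])     # hypothetical network output
actual_output = normalised_output * normalising_factors
print(actual_output)
```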
5.5.1 SELECTION OF LEARNING RATE PARAMETER & MOMENTUM
CONSTANT
The learning rate parameter and the momentum constant were kept constant
throughout the training of the network, at 0.15 and 0.75 respectively.
5.5.2 ARCHITECTURE OF NETWORK
Architecture of the network has been selected by trial and error to
minimise the error and to obtain speedy convergence. The network used
for the training of the data consists of two hidden layers with an input
and output layer. The input layer has five neurons representing the input
parameters, which are i) 28-day compressive strength, ii) fineness
modulus, iii) coarse aggregate ratio, iv) water content and v) water-cement
ratio. The output layer has two neurons, which represent i) the fine
aggregate mix proportion and ii) the coarse aggregate mix proportion. Each
hidden layer consists of thirty neurons. A nonlinear sigmoid function has
been used as the activation function.
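One possible sketch of the 5–30–30–2 architecture with sigmoid activations and bias inputs is given below; the weight values are random placeholders in the -0.3 to +0.3 range used for initialisation, and the example input values are illustrative only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

layer_sizes = [5, 30, 30, 2]            # input layer, two hidden layers, output layer
rng = np.random.default_rng(0)
weights = [rng.uniform(-0.3, 0.3, (n_out, n_in + 1))    # +1 column for the bias unit
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Forward pass through the 5-30-30-2 network with sigmoid activations."""
    a = np.asarray(x, dtype=float)
    for w in weights:
        a = sigmoid(w @ np.append(a, 1.0))   # append a constant 1 as the bias input
    return a

# Illustrative normalised inputs: strength, fineness modulus, CA ratio, water content, w/c ratio
print(forward([0.8, 0.92, 0.5, 0.97, 0.77]))
```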
5.5.3 SELECTION OF TRAINING DATA SET
Selection of training data set for the training of the network is the most
important step. In preparing the training data set different conditions
have to be considered which include the size of the network, learning
rate parameter and momentum coefficient. Increasing the number of
pattern increases the potential level of accuracy that can be achieved by
the network. A large number of training patterns, however, can
sometimes overwhelm the training algorithm; consequently, there is no
guarantee that adding more patterns leads to improved solutions. All values
of water content, fineness modulus, coarse aggregate ratio and 28-day
compressive strength corresponding to water-cement ratios 0.42, 0.44,
0.46, 0.48 and 0.50 have been taken for the training of the network,
whereas all the values of water content, fineness modulus and 28-day
compressive strength corresponding to water-cement ratios 0.40 and 0.52
have been kept for prediction. The mean square error between the target
output and the network output has been reduced to 0.0005.
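The training/testing split described above can be sketched as a simple filter on the rows of Table 5.1; `mix_data` below is a hypothetical container holding those rows, with only three of them shown:

```python
# Hypothetical container for the rows of Table 5.1:
# (w/c ratio, water kg/m3, fineness modulus, CA ratio, 28-day strength, fine, coarse)
mix_data = [
    (0.40, 185, 2.4, "1:1", 32.17, 1.49, 2.28),
    (0.42, 185, 2.4, "1:1", 24.64, 1.62, 2.40),
    (0.52, 190, 2.6, "2:1", 29.77, 2.21, 2.80),
    # ... remaining rows of Table 5.1
]

# w/c ratios 0.42-0.50 are used for training; 0.40 and 0.52 are held out for prediction
train_wc = {0.42, 0.44, 0.46, 0.48, 0.50}
training_set = [row for row in mix_data if row[0] in train_wc]
testing_set = [row for row in mix_data if row[0] in (0.40, 0.52)]
```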
5.5.4 TESTING OF THE NETWORK
The variations of water content, fineness modulus, coarse aggregate
ratio and 28-day compressive strength for each water-cement ratio have
been trained as the data set. The network output values of the fine
aggregate mix proportion and the coarse aggregate mix proportion for
water-cement ratios of 0.40 and 0.52 have been considered for testing of the
network. The network output values have been plotted in figures 5.19 to 5.22
in the form of bar charts, and their corresponding experimental values are
also plotted. These results confirm that the neural network has been trained
properly, as all the outputs given by the network are within 2% error.
5.5.5 COMPARISON OF PREDICTED AND EXPERIMENTAL VALUES
The predicted values of mix proportions for water-cement ratios 0.42 and
0.50 (as these data were not included in the training data set) for all
data sets have been plotted in figures 5.23 to 5.26, together with their
corresponding experimental values. The predicted values of fine aggregate
and coarse aggregate have been found to be in close agreement with the
experimental values. The maximum error obtained in the predicted values of
mix proportion is 5%. Training has been done by taking the w/c ratio, water
content, fineness modulus, ratio of coarse aggregate and 28-day compressive
strength of concrete as input and the proportioning of the concrete
ingredients as output. These results are plotted in figures 5.19 to 5.26
and discussed below:
In figures 5.19 and 5.20 the experimental results for fine aggregate at w/c
ratios 0.40 and 0.52 were compared with the ANN results, and it is found that
the results are almost the same. From figures 5.21 and 5.22 it is observed
that for coarse aggregate also the experimental and ANN results match each
other.
In figure 5.23 the experimental and ANN results match each other except for
data set number 10; this may be because of a faulty experimental observation
or poor workmanship. In figure 5.24 the experimental and ANN results match
each other.
In figure 5.25 it is observed that the prediction of the coarse aggregate is
exact and the experimental and ANN results are almost identical. From figure
5.26 it is observed that the prediction of coarse aggregate at a w/c ratio of
0.50 is 98 percent accurate, and thus the coarse as well as the fine
aggregate proportions can be predicted reliably.
REFERENCES
1. ACI Committee 211, "Standard Practice for Selecting Proportions for
Normal, Heavyweight, and Mass Concrete" (ACI 211.1-91), American Concrete
Institute, Detroit, 1991.
2. Gambhir, M.L., "Concrete Manual", Dhanpat Rai and Sons, Delhi, 1987.
3. Gambhir, M.L., "Concrete Technology", Tata McGraw-Hill, 1993.
4. Ganju, T.N., "Spreadsheeting mix designs", Concrete International, Vol. 18,
No. 12, Dec. 1996, pp. 35-38.
5. Indian Standards, "Code of Practice for Plain and Reinforced Concrete",
IS 456-2000, 3rd edition, Bureau of Indian Standards, New Delhi.
6. Indian Standards, "Specification for Coarse and Fine Aggregates from
Natural Sources for Concrete", IS 383-1963, Indian Standards Institution,
New Delhi.
7. Indian Standards, "Handbook on Concrete Mixes", SP: 23 (S&T)-1983,
Indian Standards Institution, New Delhi.
8. Kasperkiewicz, Janusz, "Optimisation of concrete mix using a
spreadsheet package", ACI Materials Journal (American Concrete
Institute), Vol. 91, No. 6, Nov-Dec, pp. 551-559.
9. Krishna Raju, N., Basavarajaiah, B.S. and Ramakrishna, N., "A comparative
study of concrete mix design procedures", Indian Concrete Journal, Jan.
1979, pp. 13-16.
10. Popovics, Sandor and Popovics, John S., "Novel aspects in computerisation
of concrete proportioning", Concrete International, Vol. 18, No. 12, Dec. 1996,
pp. 54-58.
11. Krishna Raju, N. and Krishna Reddy, Y., "A critical review of the Indian,
British and American methods of concrete mix design", The Indian
Concrete Journal, April 1989.
12. Jepsen, Marianne Tange, "Predicting concrete durability by using
artificial neural network", Proceedings of "Durability of Exposed Concrete
Containing Secondary Cementitious Materials", Hirtshals, November 2002.
APPENDIX A
W/C ratio = 0.4
Water = 190 kg/m³
F.M. of sand = 2.4
Cement content = 190/0.4 = 475 kg/m³
Bulk volume of coarse aggregate per unit volume of concrete = 0.66
Mass of coarse aggregate = 0.66 × 1600 = 1056 kg/m³
Volume of cement = 475/(3.15 × 1000) = 0.151 m³
Volume of water = 190/1000 = 0.19 m³
Volume of coarse aggregate = 1056/(2.64 × 1000) = 0.40 m³
Total volume = 0.151 + 0.19 + 0.40 = 0.74 m³
Volume of fine aggregate required = 1.00 – 0.74 = 0.26 m³
Mass of fine aggregate = 0.26 x 2.64 x 1000 = 665 Kg/m3
Proportions = 1:1.4 : 2.223
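The Appendix A calculation can be reproduced with a few lines of Python; the specific gravities (3.15 and 2.64) and the dry-rodded unit mass (1600 kg/m³) are the values used in the hand calculation above, and small differences in the final figures arise from intermediate rounding:

```python
# Reproduces the Appendix A calculation for w/c = 0.4, 190 kg water, FM 2.4.
w_c = 0.40
water = 190.0                        # kg per m3 of concrete
cement = water / w_c                 # 475 kg/m3
bulk_volume_ca = 0.66                # Table 3.3 (20 mm aggregate, FM 2.4)
coarse = bulk_volume_ca * 1600.0     # 1056 kg/m3 (dry-rodded unit mass 1600 kg/m3)

vol_cement = cement / (3.15 * 1000)              # ~0.151 m3
vol_water = water / 1000.0                       # 0.190 m3
vol_coarse = coarse / (2.64 * 1000)              # 0.400 m3
vol_fine = 1.0 - (vol_cement + vol_water + vol_coarse)   # ~0.26 m3
fine = vol_fine * 2.64 * 1000                    # ~684 kg/m3 (the appendix reports 665 kg/m3)

print("proportions 1 : %.2f : %.2f" % (fine / cement, coarse / cement))
```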
CONCLUSIONS
APPENDIX A
REFERENCES
FIGURES
LIST OF FIGURES
5.1 Number of days v/s compressive strength for w/c ratio 0.40, 185 kg water
5.2 Number of days v/s compressive strength for w/c ratio 0.40, 190 kg water
5.3 Number of days v/s compressive strength for w/c ratio 0.42, 185 kg water
5.4 Number of days v/s compressive strength for w/c ratio 0.42, 190 kg water
5.5 Number of days v/s compressive strength for w/c ratio 0.44, 185 kg water
5.6 Number of days v/s compressive strength for w/c ratio 0.44, 190 kg water
5.7 Number of days v/s compressive strength for w/c ratio 0.46, 185 kg water
5.8 Number of days v/s compressive strength for w/c ratio 0.46, 190 kg water
5.9 Number of days v/s compressive strength for w/c ratio 0.48, 185 kg water
5.10 Number of days v/s compressive strength for w/c ratio 0.48, 190 kg water
5.11 Number of days v/s compressive strength for w/c ratio 0.50, 185 kg water
5.12 Number of days v/s compressive strength for w/c ratio 0.50, 190 kg water
5.13 Number of days v/s compressive strength for w/c ratio 0.52, 185 kg water
5.14 Number of days v/s compressive strength for w/c ratio 0.52, 190 kg water
5.15 W/C ratio v/s 28 days compressive strength for 185 kg water, FM 2.6
5.16 W/C ratio v/s 28 days compressive strength for 190 kg water, FM 2.6
5.17 W/C ratio v/s 28 days compressive strength for 185 kg water, FM 2.4
5.18 W/C ratio v/s 28 days compressive strength for 190 kg water, FM 2.4
5.19 Comparison of experimental results with ANN results for prediction of data
(Fine Aggregate, w/c ratio 0.40)
5.20 Comparison of experimental results with ANN results for prediction of data
(Fine Aggregate, w/c ratio 0.52)
5.21 Comparison of experimental results with ANN results for prediction of data
(Coarse Aggregate, w/c ratio 0.40)
5.22 Comparison of experimental results with ANN results for prediction of data
(Coarse Aggregate, w/c ratio 0.52)
5.23 Comparison of experimental results with ANN results for prediction of data
(Fine Aggregate, w/c ratio 0.42)
5.24 Comparison of experimental results with ANN results for prediction of data
(Fine Aggregate, w/c ratio 0.50)
5.25 Comparison of experimental results with ANN results for prediction of data
(Coarse Aggregate, w/c ratio 0.42)
5.26 Comparison of experimental results with ANN results for prediction of data
(Coarse Aggregate, w/c ratio 0.50)