
Hardware Used
CPU specifications: Intel Core i5-5200U @ 2.20 GHz
RAM specifications: 8.00 GB

Software Used

TensorFlow (Python, Spyder IDE)
Packages used: numpy, tensorflow

tf.estimator (TensorFlow class)
Defined in: tensorflow/python/estimator/estimator.py.

Function: the Estimator class trains and evaluates TensorFlow models.

An Estimator object wraps a model specified by a model_fn. Given the input features and other parameters, the model_fn returns the ops that perform training, evaluation (e.g. accuracy computation), or prediction.
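To make this wrapping relationship concrete, here is a minimal conceptual sketch in plain Python (not real TensorFlow code; the class and model_fn below are invented for illustration): the wrapper calls model_fn with a mode, and model_fn returns the result appropriate to that mode.

```python
# Conceptual sketch of the Estimator/model_fn contract, in plain Python.
class SimpleEstimator:
    def __init__(self, model_fn):
        self.model_fn = model_fn

    def train(self, features, labels):
        # Dispatch to the model function in "train" mode (returns a loss here).
        return self.model_fn(features, labels, mode="train")

    def evaluate(self, features, labels):
        # Dispatch in "eval" mode (returns an accuracy here).
        return self.model_fn(features, labels, mode="eval")


def model_fn(features, labels, mode):
    predictions = [2 * x for x in features]  # a stand-in "model"
    if mode == "train":
        # Squared-error "loss" over the batch.
        return sum((p - y) ** 2 for p, y in zip(predictions, labels))
    # Fraction of exact matches, standing in for an accuracy op.
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)


est = SimpleEstimator(model_fn)
```

The real tf.estimator.Estimator works the same way at a high level: one model_fn, called with different modes, yields the training, evaluation, and prediction ops.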

Dataset Description

In this work we have used the Parkinson's Telemonitoring Data Set from the UCI Machine Learning Repository. This data set is composed of a range of biomedical voice measurements from 42 people with early-stage Parkinson's disease, recorded during a six-month trial of a telemonitoring device for remote monitoring of symptom progression.
The attributes of the table are subject number, subject age, subject gender,
time interval from baseline recruitment date, motor UPDRS, total UPDRS, and 16
biomedical voice measures. The rows contain 5,875 voice recordings of these
patients. The objective of the dataset is to predict the motor and total UPDRS
scores (‘motor_UPDRS’ and ‘total_UPDRS’) from the 16 voice measures of Parkinson’s
Telemonitoring Dataset.
The data is in ASCII CSV format. Each row of the CSV file contains an instance
corresponding to one voice recording of each individual. There are around 200
recordings per patient; the subject number of the patient is identified in the
first column.
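To illustrate that layout, rows of this shape can be loaded with NumPy as follows. This is a sketch: the values below are invented, only the first few of the 22 columns are shown, and the real file has 5,875 rows.

```python
import io
import numpy as np

# Illustrative rows only; values are invented and only the first few
# of the dataset's 22 columns are shown.
csv_text = """subject#,age,sex,test_time,motor_UPDRS,total_UPDRS,Jitter(%)
1,72,0,5.6431,28.199,34.398,0.00662
1,72,0,12.666,28.447,34.894,0.00300
2,58,1,18.500,20.100,27.200,0.00430
"""

data = np.loadtxt(io.StringIO(csv_text), delimiter=",", skiprows=1)
subject_ids = data[:, 0].astype(int)  # first column identifies the patient
```

Grouping rows by the first column recovers the per-patient recordings.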

Attribute Information:
subject# – Integer that uniquely identifies each subject
age – Subject age
sex – Subject gender: ‘0’ – male, ‘1’ – female
test_time – Time since recruitment into the trial. The integer part is the
number of days since recruitment.
motor_UPDRS – Clinician’s motor UPDRS score, linearly interpolated
total_UPDRS – Clinician’s total UPDRS score, linearly interpolated
Jitter(%),Jitter(Abs),Jitter:RAP,Jitter:PPQ5,Jitter:DDP – Several measures of
variation in fundamental frequency.

Shimmer,Shimmer(dB),Shimmer:APQ3,Shimmer:APQ5,Shimmer:APQ11,Shimmer:DDA
– Several measures of variation in amplitude
NHR,HNR – Two measures of ratio of noise to tonal components in the voice
RPDE – A nonlinear dynamical complexity measure
DFA – Signal fractal scaling exponent
PPE – A nonlinear measure of fundamental frequency variation

Data Preprocessing:

We have normalised our dataset using min-max normalization, whose formula is:

normalized value = (column value – column minimum) / (column maximum – column minimum)

Normalization of data means adjusting values that have been measured on different scales to a common scale, and is generally done before further processing. Min-max normalisation is often known as feature scaling: the values of a numeric attribute of the dataset, commonly called a property, are rescaled to lie between 0 and 1.
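A minimal NumPy sketch of this formula (the column values below are invented for illustration):

```python
import numpy as np

def min_max_normalize(col):
    """Rescale a column to [0, 1]: (value - min) / (max - min)."""
    cmin, cmax = col.min(), col.max()
    return (col - cmin) / (cmax - cmin)

col = np.array([2.0, 4.0, 6.0, 10.0])
normalized = min_max_normalize(col)
# the column minimum maps to 0 and the maximum to 1
```

Applying this per column puts all 16 voice measures on the same [0, 1] scale before training.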

Prediction

tf.estimator Quickstart
The tasks to be performed are:
1. Load CSVs containing UPDRS training/test data into a TensorFlow Dataset
2. Construct a neural network classifier
3. Train the model using the training data
4. Evaluate the accuracy of the model

The tf.estimator API uses input functions, which create the TensorFlow operations that generate data for the model. We can use tf.estimator.inputs.numpy_input_fn to produce the input pipeline.
Fit the DNNClassifier to the Training Data

Pass train_input_fn as the input_fn, and the number of steps to train (here, 2000):
# Train model.
classifier.train(input_fn=train_input_fn, steps=2000)
The state of the model is preserved in the classifier, so it can be trained iteratively if needed. The above code is equivalent to the following:
classifier.train(input_fn=train_input_fn, steps=1000)
classifier.train(input_fn=train_input_fn, steps=1000)
Evaluate Model Accuracy
Like train, evaluate takes an input function that builds its input pipeline.
evaluate returns a dict with the evaluation results.
Pseudocode
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

# Data sets
TRAINING = "Training-TotalUPDRS01.csv"
TEST = "Test-TotalUPDRS01.csv"

# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)
test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=TEST,
    target_dtype=np.int,
    features_dtype=np.float32)

# Specify that all features have real-value data.
feature_columns = [tf.feature_column.numeric_column("x", shape=[16])]

# Build a 3-layer DNN with 10, 20 and 10 units respectively.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[10, 20, 10],
    n_classes=2,
    model_dir="/tmp/pd_model")

# Define the training inputs.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array(training_set.data)},
    y=np.array(training_set.target),
    num_epochs=None,
    shuffle=True)

# Train model.
classifier.train(input_fn=train_input_fn, steps=1000)

# Define the test inputs.
test_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array(test_set.data)},
    y=np.array(test_set.target),
    num_epochs=1,
    shuffle=False)

# Evaluate accuracy.
accuracy_score = classifier.evaluate(input_fn=test_input_fn)["accuracy"]
print("Test Accuracy: {0:f}".format(accuracy_score))
