In computational research, GPU acceleration has become indispensable, especially for handling complex tasks like deep learning. CoESRA's GPU node gives researchers the opportunity to access that computational power remotely, and the script presented below provides an illustrative example of how deep learning computations can be offloaded to it.

...

The CoESRA virtual desktop can assign deep learning computations to GPUs for faster processing. The example script first creates a synthetic dataset, then utilises it to train a Deep Neural Network (DNN), showcasing the computational demands and GPU usage typical of real-world applications. GPU activity can be observed visually via the nvtop task monitor.
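
For readers who want to confirm that TensorFlow is actually placing work on the GPU, the short sketch below (an illustrative addition, not part of the CoESRA example script) runs a matrix multiplication on the GPU when one is visible and falls back to the CPU otherwise:

Code Block (Python)
import tensorflow as tf

# Pick the GPU if TensorFlow can see one, otherwise fall back to the CPU.
gpus = tf.config.list_physical_devices('GPU')
device = '/GPU:0' if gpus else '/CPU:0'

# Run a simple matrix multiplication on the chosen device.
with tf.device(device):
    a = tf.random.uniform((1000, 1000))
    b = tf.random.uniform((1000, 1000))
    c = tf.matmul(a, b)

print("Computation placed on:", c.device)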

...

This is the script used in the above example.

Code Block (Python)
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Check the availability of GPU
if tf.config.list_physical_devices('GPU'):
    print("GPU is available")
else:
    print("GPU not available, using CPU")

# Generate a synthetic dataset for demonstration purposes
X = np.random.rand(300000000, 10)  # Input data with 10 features
y = np.random.rand(300000000, 1)   # Corresponding output data
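
# Note: at NumPy's default float64 precision, X occupies roughly 24 GB of host
# RAM (300,000,000 samples x 10 features x 8 bytes) and y a further ~2.4 GB;
# reduce the sample count on machines with less memory.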

# Initialize a Sequential DNN model
model = Sequential()

# Input layer: this layer has 20 neurons (units of computation) 
# and receives input data with 10 features
model.add(Dense(20, input_dim=10, activation='relu'))  

# Hidden layers: layers that process the data between input and output
# Each of these layers has 4096 neurons
model.add(Dense(4096, activation='relu')) 
model.add(Dropout(0.5))  # Introducing some randomness by "dropping out" some neurons to prevent overfitting
model.add(Dense(4096, activation='relu')) 
model.add(Dense(4096, activation='relu'))
model.add(Dense(4096, activation='relu'))

# Output layer: produces the final prediction. Only 1 neuron as our synthetic 
# data has a single output value per sample
model.add(Dense(1, activation='linear'))

# Set up the model for training. Here:
# - 'mean_squared_error' is the criterion used to measure how well the model is performing (lower is better).
# - 'adam' is a type of optimisation algorithm that adjusts neuron weights to improve the model's predictions.
model.compile(loss='mean_squared_error', optimizer='adam')

# Train the model using the synthetic data. 
# The model will process the data in batches of 1024 samples at a time, for 1 full cycle (epoch).
model.fit(X, y, epochs=1, batch_size=1024)

print("Training complete!")

Computational Insights and Real-World Relevance:

...