[Week 12] NPTEL Deep Learning Assignment Answers 2023


NPTEL Deep Learning Week 12 Assignment Answers 2023

Q1.

image 66
Answer:- a

Q2.

image 67
Answer:- a

Q3. When the GAN game has converged to its Nash equilibrium (i.e., the Discriminator can do no better than random guessing when distinguishing fake samples from real samples), what is the probability (of belonging to the real class) given by the Discriminator to a fake generated sample?
a. 1
b. 0.5
c. 0
d. 0.25


Answer:- b
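
A quick way to see this: for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)); at the Nash equilibrium p_g = p_data, so D*(x) = 0.5 everywhere. A minimal sketch (the density values below are purely illustrative):

    # Illustrative only: plug equal real/fake densities into the optimal-discriminator formula.
    p_data = 0.3   # assumed density of the real data at some point x
    p_g = 0.3      # at equilibrium the generator matches it
    d_star = p_data / (p_data + p_g)
    print(d_star)  # 0.5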

Q4. Why is the re-parameterization trick used in VAEs?
a. Without re-parameterization, the mean vector of the latent code of the VAE encoder will tend towards zero
b. Sampling from the VAE encoder's latent space is non-differentiable, and thus we cannot backpropagate gradients during optimization using gradient descent
c. We need to re-parameterize the Normal distribution over the latent space to a Bernoulli distribution
d. None of the above

Answer:- b
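
For reference, a minimal PyTorch-style sketch of the re-parameterization trick (names such as mu and log_var are illustrative): sampling z directly from N(mu, sigma^2) blocks gradients, so we instead sample eps ~ N(0, I) and write z = mu + sigma * eps, which is differentiable with respect to mu and log_var.

    import torch

    def reparameterize(mu, log_var):
        # eps carries the randomness; mu and log_var stay on the differentiable path
        std = torch.exp(0.5 * log_var)
        eps = torch.randn_like(std)
        return mu + eps * std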

Q5.

image 68
Answer:- a

Q6. What will happen if we do not enforce KL divergence loss in VAE latent code space?
a. The latent code distribution will mimic a zero-mean, unit-variance Normal distribution
b. Network will learn to form distinctive clusters with high standard deviation for each cluster
c. Network will learn to form distinctive clusters with low standard deviation for each cluster
d. None of the above

Answer:- c

Q7.

image 69
Answer:- b

Q8. Which one of the following statements is True about Variational Autoencoder (VAE)?
a. VAE can only be applied on monochrome images
b. VAE reconstructions tend to be blurry
c. VAE reconstructions always have high frequency preserving details
d. VAE latent space is designed to be NOT smooth

Answer:- b

Q9.

image 70
Answer:- b

Q10. Which one of the following is the preferable situation during GAN training?
a. Discriminator loss should monotonically increase
b. Generator loss should monotonically increase
c. Discriminator loss should monotonically go down while the Generator loss should monotonically go up
d. Neither the Discriminator loss nor the Generator loss goes up or down monotonically

Answer:- d


NPTEL Deep Learning Week 11 Assignment Answers 2023

1. What is the use of skip connections in image denoising networks?
a. Helping the de-convolution layers recover an improved, clean version of the image.
b. Back-propagating the gradient to the bottom layers, which makes training easier.
c. Creating a direct path between a convolution layer and the corresponding mirror de-convolution layer.
d. All of the above.

Answer :- d

2.

a11q2
Answer :- b

3.

a11q3
Answer :- a

4. The Fully Convolutional Network (FCN) became one of the major successful network architectures. Can you identify the advantages of FCN that make it a successful architecture for semantic segmentation?
a. Larger receptive field
b. Mixing of global features
c. Lesser computation required
d. All of the above

Answer :- d 

5. What are the different challenges one faces while creating a facial recognition system?
a. Different illumination condition
b. Different pose and orientation of face images
c. Limited dataset for training
d. All of the above

Answer :- d

6.

a11q6
Answer :- a

7.

a11q7
Answer :- c

8. Which of the following is true for semantic segmentation?
a. Semantic segmentation can be considered a pixel-wise classification problem.
b. The semantic segmentation output has the same spatial dimensions as the input image.
c. It has applications in autonomous driving, industrial inspection, and medical imaging analysis.
d. All of the above

Answer :- d

9.

a11q9
Answer :- a

10. Which of the following operations reduces the spatial dimension of features?
a. Max un-Pooling
b. Convolution with 3 x 3 Kernel, Stride=2, Padding all sides = 1
c. Convolution with 3 x 3 Kernel, Stride=1, Padding all sides = 1
d. Transposed convolution

Answer :- b
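
A short check of the convolution options using the standard output-size formula out = floor((n - k + 2p) / s) + 1 (a sketch; n = 32 is just an example input size). Max un-pooling and transposed convolution increase the spatial size, so only the stride-2 convolution reduces it.

    def conv_out(n, k, s, p):
        return (n - k + 2 * p) // s + 1

    n = 32                        # example spatial size
    print(conv_out(n, 3, 2, 1))   # stride 2, pad 1 -> 16 (spatial size reduced)
    print(conv_out(n, 3, 1, 1))   # stride 1, pad 1 -> 32 (spatial size unchanged)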

NPTEL Deep Learning Week 10 Assignment Answers 2023

1.

a10q1
Answer :- a

2.

a10q2
Answer :- a

3.

a10q3
Answer :- c

4. Which among the following is NOT a data augmentation technique?
a. Random horizontal and vertical flip of image
b. Random shuffle all the pixels of an image
c. Random color jittering
d. All the above are data augmentation techniques

Answer :- b

5. Batch Normalization is helpful because

Answer :- a

6. A Batch Norm layer accepts a batch of 128-D vectors. How many parameters of the Batch Norm layer are trained via backpropagation during the course of training?
a. 256
b. 512
c. 128
d. 1024

Answer :- a
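
This can be verified with a quick PyTorch check: BatchNorm over 128-D activations learns a scale (gamma) and a shift (beta) per feature, i.e. 128 + 128 = 256 trainable parameters (the running mean and variance are buffers, not trained).

    import torch.nn as nn

    bn = nn.BatchNorm1d(128)
    print(sum(p.numel() for p in bn.parameters() if p.requires_grad))  # 256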

7. Which of the following is a regularization method?
a. Data augmentation
b. Dropout
c. Weight decay
d. All of the above

Answer :- d

8. Which one of the following regularization methods induces sparsity among the trained weights?
a. L1 regularizer
b. L2 regularizer
c. Both L1 & L2
d. None of the above

Answer :- a

9. How do we generally calculate mean and variance during testing?
a. Batch normalization is not required during testing
b. Mean and variance based on test image
c. Estimated mean and variance statistics during training
d. None of the above

Answer :- c

10.

a10q10
Answer :- c


NPTEL Deep Learning Week 9 Assignment Answers 2023

1.

a9q1
Answer :- a

2. Which of the following problems is primarily solved by the residual connections in ResNet?
a. Vanishing Gradient problem
b. Overfitting
c. Underfitting
d. Exploding gradient

Answer :- a

3.

a9q3
Answer :- a

4. Choose the correct option
Statement 1:
Stochastic gradient descent is less prone to getting stuck in local minima because of inherent noise due to minibatch sampling.
Statement 2: Large learning rates with annealing schedule can be used with higher mini-batch size.
a. Statement 1 is True, Statement 2 is True
b. Statement 1 is False, Statement 2 is True
c. Statement 1 is True, Statement 2 is False
d. Statement 1 is False, Statement 2 is False

Answer :- a

5. Which of the following is the simplest optimizer, in the computational-requirement sense, that deals with oscillations and saddle points?
a. Stochastic Gradient Descent (SGD)
b. SGD with Momentum/Nesterov's Accelerated Gradient
c. RMSProp
d. AdaGrad/ Adam

Answer :- b

6.

a9q6
Answer :- b

7. For a function f(θ0, θ1), if θ0 and θ1 are initialized at a global minimum, then what should be the values of θ0 and θ1 after a single iteration of gradient descent?
a. θ0 and θ1 will update as per the gradient descent rule
b. θ0 and θ1 will remain the same
c. Depends on the values of θ0 and θ1
d. Depends on the learning rate

Answer :- b
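
The reasoning, as a tiny sketch: the gradient vanishes at a (global) minimum, so the update θ ← θ − η·∇f(θ) leaves θ0 and θ1 unchanged (the numbers below are illustrative):

    theta0, theta1 = 2.0, -3.0   # assumed to sit at a global minimum
    grad0, grad1 = 0.0, 0.0      # gradient is zero at a minimum
    lr = 0.1
    theta0 -= lr * grad0
    theta1 -= lr * grad1
    print(theta0, theta1)        # still 2.0 -3.0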

8. What can be one of the practical problems of exploding gradients?
a. Too large update of weight values leading to an unstable network
b. Too small update of weight values inhibiting the network from learning
c. Too large update of weight values leading to faster convergence
d. Too small update of weight values leading to slower convergence

Answer :- a

9.

a9q9
Answer :- b

10. Choose the correct statement regarding GoogLeNet.
a. Multiple auxiliary classifiers are used at different depth levels to avoid the vanishing gradient problem
b. Bottleneck layers reduce the number of learnable weights
c. The Inception module captures information of the image at varying resolutions
d. All of the above

Answer :- d

NPTEL Deep Learning Week 8 Assignment Answers 2023

1.

a8q1
Answer :- a

2. An RGB input image has been converted into a matrix of size 257 X 257 X 3 and a kernel/filter of size 7 X 7 X 3 with a stride of 2 and padding = 3 is used for 2D convolution. What will be the size of the output of convolution?
a. 129×129×1
b. 128×128×1
c. 254x254x3
d. 256x256x1

Answer :- a
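
Worked out with the usual formula out = floor((n - k + 2p) / s) + 1 for each spatial dimension; the depth collapses to 1 because a single 7 X 7 X 3 kernel spans all input channels. A minimal check:

    n, k, s, p = 257, 7, 2, 3
    out = (n - k + 2 * p) // s + 1
    print(out)   # 129, so the output is 129 x 129 x 1 for one kernel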

3. What is the primary reason for adding pooling layers?
a. Promote small shift invariance
b. Reduce computations for subsequent layers
c. To produce activations that summarize filter response in local windows.
d. Both b and c

Answer :- d

4.

a8q4
Answer :- b

5. Suppose you have 8 convolutional kernels of size 5 x 5 with no padding and stride 1 in the first layer of a convolutional neural network. You pass an input of dimension 228 × 228 × 3 through this layer. What are the dimensions of the data which the next layer will receive?
a. 224 × 224 × 3
b. 224 × 224 x 8
c. 226 x 226 × 8
d. 225 x 225 x 3

Answer :- b
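
The same formula gives (228 − 5)/1 + 1 = 224 per spatial dimension, and the channel count equals the number of kernels (8). A quick shape check with PyTorch (a sketch, assuming NCHW input):

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5, stride=1, padding=0)
    x = torch.randn(1, 3, 228, 228)
    print(conv(x).shape)   # torch.Size([1, 8, 224, 224])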

6. Choose the correct statement in the context of transfer learning
a. Higher layers learn task-specific features, whereas lower layers learn general features
b. Transfer learning is generally used in situations when the task-specific dataset is very small
c. The weights of the lower layers of a pretrained CNN (trained on a dataset like ImageNet) are copied, the higher layers are randomly/Gaussian initialized, and the entire network is fine-tuned on the smaller dataset
d. All of the above

Answer :- d

7. The advantage of ReLU over Sigmoid and TanH is
a. Low computational requirements
b. Alleviates vanishing gradients to some extent
c. Backpropagation is simpler
d. All of the above

Answer :- d

8. Statement 1: For a transfer learning task, lower layers are more generally transferred to another task
Statement 2: For a transfer learning task, last few layers are more generally transferred to another task
Which of the following option is correct?
a. Statement 1 is correct and Statement 2 is incorrect
b. Statement 1 is incorrect and Statement 2 is correct
c. Both Statement 1 and Statement 2 are correct
d. Both Statement 1 and Statement 2 are incorrect

Answer :- a

9.

a8q9
Answer :- b

10. How many convolution layers are there in a LeNet-5 architecture?
a. 2
b. 3
c. 4
d. 5

Answer :- a

NPTEL Deep Learning Week 7 Assignment Answers 2023

1. Select the correct option.
a. Layer-by-layer autoencoder pretraining reduces GPU/CPU RAM requirements
b. Layer-by-layer autoencoder pretraining alleviates slow convergence
c. Layer-by-layer autoencoder pretraining followed by finetuning converges to more optimal parameters than End-to-End training of autoencoders
d. All of the above

Answer :- d

2. Regularization of Contractive Autoencoder is imposed on
a. Jacobian matrix of encoder activations with respect to the input
b. Weights
c. Inputs
d. Does not use regularization

Answer :- a

3. Select the true statements about KL Divergence
a. Measures distance between two probability distributions
b. Has a range from 0 to 1
c. Is symmetric, i.e., KL(P || Q) = KL(Q || P)
d. None of the above

Answer :- d

4.

a7q4
Answer :- c

5. In which condition does an autoencoder have more powerful generalization than Principal Component Analysis (PCA) when performing dimensionality reduction?
a. Undercomplete Linear Autoencoder
b. Overcomplete Linear Autoencoder
c. Undercomplete Non-linear Autoencoder
d. Overcomplete Non-Linear Autoencoder

Answer :- c

6. An autoencoder consists of 128 input neurons and 32 hidden neurons. If the network weights are represented using single-precision floating point numbers (size = 4 bytes), then what will be the size of the weight matrix?
a. 33408 Bytes
b. 16704 Bytes
c. 8352 Bytes
d. 32768 Bytes

Answer :- a
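
The stated answer counts encoder and decoder weights plus biases: (128·32 + 32) + (32·128 + 128) = 8352 parameters, and 8352 × 4 bytes = 33408 bytes. A minimal check:

    n_in, n_hidden = 128, 32
    params = (n_in * n_hidden + n_hidden) + (n_hidden * n_in + n_in)   # encoder + decoder, with biases
    print(params, params * 4)   # 8352 parameters, 33408 bytes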

7. Which of the following is used to match a template pattern in a signal?
a. Cross Correlation
b. Convolution
c. Normalized cross correlation
d. None of the above

Answer :- c

8. What is the role of sparsity constraint in a sparse autoencoder?
a. Control the number of active nodes in a hidden layer
b. Control the noise level in a hidden layer
c. Control the hidden layer length
d. Not related to sparse autoencoder

Answer :- a

9. Which of the following is true about convolution?
a. Convolution is used to compute features from a signal
b. Can be used to compute the cross correlation between x(t) and y(t) if the input signal x(t) is transformed to x(-t) and y(t) is used as the filter.
c. Both a and b
d. None of above

Answer :- c

10. Which of the following is an LTI/LSI system? y and x are the output and input respectively.
a. y = m×x + n×x
b. y = m×x + c
c. y = m×x - c
d. y = m×x

Answer :- a

NPTEL Deep Learning Week 6 Assignment Answers 2023

1. Which of the following is FALSE about PCA and Autoencoders?
a. PCA works well with non-linear data but Autoencoders are best suited for linear data
b. Output of both PCA and Autoencoders is lossy
c. Both PCA and Autoencoders can be used for dimensionality reduction
d. None of the above

Answer :- a

2. Which of the following is not true for PCA? Tick all the options that are correct.
a. Rotates the axes to lie along the principal components
b. Is calculated from the covariance matrix
c. Removes some information from the data
d. Eigenvectors describe the length of the principal components

Answer :- d

3.

a6q3
Answer :- d

4.

a6q4
Answer :- a

5. Suppose a neural network has 3 input nodes, a, b, c. There are 2 neurons, X and F. X = a + 2b + 4c and F = 2X + 1. What is the output F when the input (a, b, c) = (-6, 1, 2)?
a. 5
b. 4
c. 9
d. 8

Answer :- c

6. Suppose a neural network has 3 input nodes, a, b, c. There are 2 neurons, X and F. X = a + 2b + 4c and F = 2X + 1. What is the gradient of F with respect to a, b and c? Assume (a, b, c) = (-6, 1, 2).
a. (2, 4, 8)
b. (1, 2, 4)
c. (-1, -2, -4)
d. (2, 2, 4)

Answer :- a
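
Both Q5 and Q6 follow from the chain rule: F = 2X + 1 with X = a + 2b + 4c gives F = 2(-6 + 2 + 8) + 1 = 9, and dF/d(a, b, c) = 2·(1, 2, 4) = (2, 4, 8). A small autograd check (a sketch):

    import torch

    a, b, c = (torch.tensor(v, requires_grad=True) for v in (-6.0, 1.0, 2.0))
    X = a + 2 * b + 4 * c
    F = 2 * X + 1
    F.backward()
    print(F.item(), a.grad.item(), b.grad.item(), c.grad.item())   # 9.0 2.0 4.0 8.0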

7. An autoencoder with a single hidden layer and no biases has 100 input neurons and 10 hidden neurons. What will be the number of parameters associated with this autoencoder?
a. 1000
b. 2000
c. 2110
d. 1010

Answer :- b
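
Here only weights are counted (no bias nodes): 100 × 10 for the encoder plus 10 × 100 for the decoder.

    print(100 * 10 + 10 * 100)   # 2000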

8.

a6q8
Answer :- b

9.

a6q9
Answer :- c

10. Which of the following two vectors can form the first two principal components?
a. {2; 3; 1} and {3; 1; -9}
b. {2; 4; 1} and {-2; 1; -8}
c. {2; 3; 1} and {-3; 1; -9}
d. {2; 3; -1} and {3; 1; -9}

Answer :- a
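
Principal components must be mutually orthogonal, so the test is a zero dot product; only the pair in option (a) passes. A quick check (option (b) uses the vectors as reconstructed above from the garbled option text):

    pairs = {
        "a": ((2, 3, 1), (3, 1, -9)),
        "b": ((2, 4, 1), (-2, 1, -8)),   # reconstructed from the garbled option text
        "c": ((2, 3, 1), (-3, 1, -9)),
        "d": ((2, 3, -1), (3, 1, -9)),
    }
    for name, (u, v) in pairs.items():
        dot = sum(x * y for x, y in zip(u, v))
        print(name, dot)   # only option a gives 0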

NPTEL Deep Learning Week 5 Assignment Answers 2023

1.

image 45
Answer:- a

2. What is the output range of the sigmoid function for an input with dynamic range [0, ∞)?
a. [0,1]
b. [-1,1]
c. [0.5, 1]
d. [0.25, 1]

Answer:- c

3.

image 46
Answer:- c

4. Which of the following are potential benefits of using ReLU activation over sigmoid activation?
a. ReLu helps in creating dense (most of the neurons are active) representations
b. ReLu helps in creating sparse (most of the neurons are non-active) representations
c. ReLu helps in mitigating vanishing gradient effect
d. Both (b) and (c)

Answer:- d

5. Suppose a fully-connected neural network has a single hidden layer with 50 nodes. The input is represented by a 5D feature vector and we have a binary classification problem. Calculate the total number of parameters of the network. Consider there are NO bias nodes in the network.
a. 250
b. 120
c. 350
d. 300

Answer:- d
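
With no bias nodes and a single output unit for the binary decision, the count is 5 × 50 (input to hidden) plus 50 × 1 (hidden to output).

    print(5 * 50 + 50 * 1)   # 300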

6. A 3-input neuron has weights 1.5, 0.5, 0.5. The transfer function is linear, with the constant of proportionality being equal to 2. The inputs are 6, 20, 4 respectively. The output will be:
a. 40
b. 42
c. 32
d. 12

Answer:- b
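
A linear transfer function with proportionality constant 2 simply scales the weighted sum: 2 × (1.5·6 + 0.5·20 + 0.5·4) = 2 × 21 = 42.

    weights = [1.5, 0.5, 0.5]
    inputs = [6, 20, 4]
    print(2 * sum(w * x for w, x in zip(weights, inputs)))   # 42.0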

7. You want to build a 5-class neural network classifier, given a leaf image, you want to classify which of the 5 leaf breeds it belongs to. Which among the 4 options would be an appropriate loss function to use for this task?
a. Cross Entropy Loss
b. MSE Loss
c. SSIM Loss
d. None of the above

Answer:- a

8.

image 47
Answer:- c

9.

image 48
Answer:- a

10. Suppose a neural network has 3 input nodes, x, y, z. There are 2 neurons, Q and F. Q = x + y and F = Q * z. What is the gradient of F with respect to x, y and z? Assume (x, y, z) = (-2, 5, -4).
a. (-4, 3, -3)
b. (-4, -4, 3)
c. (4,4, -3)
d. (3, 3, 4)

Answer:- b
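
By the chain rule: dF/dx = dF/dQ · dQ/dx = z = -4, similarly dF/dy = z = -4, and dF/dz = Q = x + y = 3. A small autograd check (a sketch):

    import torch

    x, y, z = (torch.tensor(v, requires_grad=True) for v in (-2.0, 5.0, -4.0))
    Q = x + y
    F = Q * z
    F.backward()
    print(x.grad.item(), y.grad.item(), z.grad.item())   # -4.0 -4.0 3.0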

NPTEL Deep Learning Week 4 Assignment Answers 2023

1. Which of the following cannot be realized with single layer perceptron (only input and output layer)?

a. AND
b. OR
c. NAND
d. XOR

Answer :- d

2. For a function f(θ0, θ1), if θ0 and θ1 are initialized at a local minimum, then what should be the values of θ0 and θ1 after a single iteration of gradient descent:

a. θ0 and θ1 will update as per the gradient descent rule
b. θ0 and θ1 will remain the same
c. Depends on the values of θ0 and θ1
d. Depends on the learning rate

Answer :- b

3. Choose the correct option:
i) Inability of a model to obtain sufficiently low training error is termed as Overfitting
ii) Inability of a model to reduce large margin between training and testing error is termed as Overfitting
iii) Inability of a model to obtain sufficiently low training error is termed as Underfitting
iv) Inability of a model to reduce large margin between training and testing error is termed as Underfitting

a. Only option (i) is correct
b. Both options (ii) and (iii) are correct
c. Both options (i) and (iv) are correct
d. Only option (iv) is correct

Answer :- b

4.

a4q4
Answer :- a

5. Choose the correct option. The gradient of a continuous and differentiable function:
i) is zero at a minimum
ii) is non-zero at a maximum
iii) is zero at a saddle point
iv) magnitude decreases as you get closer to the minimum

a. Only option (i) is correct
b. Options (i), (iii) and (iv) are correct
c. Options (i) and (iv) are correct
d. Only option (ii) is correct

Answer :- b

6. Input to SoftMax activation function is [3,1,2]. What will be the output?

a. [0.58,0.11, 0.31]
b. [0.43,0.24, 0.33]
c. [0.60, 0.10, 0.30]
d. [0.67, 0.09,0.24]

Answer :- d
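
The values follow from softmax(z)_i = exp(z_i) / Σ_j exp(z_j); a quick check:

    import math

    z = [3, 1, 2]
    exps = [math.exp(v) for v in z]
    s = sum(exps)
    print([round(e / s, 2) for e in exps])   # [0.67, 0.09, 0.24]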

7.

a4q7
Answer :- d

8. Which of the following options is true?

a. In Stochastic Gradient Descent, a small batch of samples is selected randomly instead of the whole data set for each iteration.
b. In Stochastic Gradient Descent, the whole data set is processed together for update in each iteration.
c. Stochastic Gradient Descent considers only one sample for updates and has noisier updates.
d. Stochastic Gradient Descent is a non-iterative process

Answer :- c

9. What are the steps for using a gradient descent algorithm?

  1. Calculate error between the actual value and the predicted value
  2. Re-iterate until you find the best weights of network
  3. Pass an input through the network and get values from output layer
  4. Initialize random weight and bias
  5. Go to each neuron which contributes to the error and change its respective values to reduce the error

    a. 1, 2, 3, 4, 5
    b. 5, 4, 3, 2, 1
    c. 3, 2, 1, 5, 4
    d. 4, 3, 1, 5, 2
Answer :- d
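
The ordering 4 → 3 → 1 → 5 → 2 maps onto the usual training loop; a minimal sketch for a single linear neuron (all names and values here are illustrative, not from the question):

    import random

    # toy data: y = 2x
    data = [(x, 2 * x) for x in range(1, 6)]

    w, b = random.random(), random.random()   # step 4: initialize random weight and bias
    lr = 0.01
    for _ in range(1000):                     # step 2: re-iterate
        for x, y_true in data:
            y_pred = w * x + b                # step 3: pass input through the network
            error = y_pred - y_true           # step 1: error between predicted and actual
            w -= lr * error * x               # step 5: adjust the contributing weights
            b -= lr * error                   #         to reduce the error
    print(round(w, 2), round(b, 2))           # approximately 2.0 and 0.0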

10.

a4q10
Answer :- d

NPTEL Deep Learning Week 3 Assignment Answers 2023

1. What is the shape of the loss landscape during optimization of SVM?

a. Linear
b. Paraboloid
c. Ellipsoidal
d. Non-convex with multiple possible local minimum

Answer :-  b. Paraboloid

2. For a 2-class problem, what is the minimum possible number of support vectors? Assume there are more than 4 examples from each class.

a. 4
b. 1
c. 2
d. 8

Answer :- c. 2

3. Choose the correct option regarding classification using SVM for two classes.

Statement i: While designing an SVM for two classes, the equation y_i(a·x_i + b) ≥ 1 is used to choose the separating plane using the training vectors.
Statement ii: During inference, for an unknown vector x_i, if y_i(a·x_i + b) ≥ 0, then the vector can be assigned class 1.
Statement iii: During inference, for an unknown vector x_i, if (a·x_i + b) > 0, then the vector can be assigned class 1.

a. Only Statement i is true
b. Both Statements i and ii are true
c. Both Statements i and iii are true
d. Both Statements ii and iii are true

Answer :- b. Both Statements i and ii are true

4. Find the scalar projection of vector b = <-4, 1> onto vector a = <1,2>?

image 23
Answer :- b

5. The distance of a feature vector [2, 3, -2] from the separating plane x1 + 2x2 + 2x3 + 5 = 0 is given by:
a. 5
b. 5/3
c. 3
d. 13

Answer:- c. 3
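
Using the point-to-plane distance |a·x + d| / ||a||: |1·2 + 2·3 + 2·(-2) + 5| / sqrt(1² + 2² + 2²) = 9 / 3 = 3. A quick check:

    import math

    a, d = [1, 2, 2], 5
    x = [2, 3, -2]
    dist = abs(sum(ai * xi for ai, xi in zip(a, x)) + d) / math.sqrt(sum(ai**2 for ai in a))
    print(dist)   # 3.0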

6. Suppose we have the below set of points with their respective classes as shown in the table. Answer the following question based on the table.

a3q6
Answer :- c

7. Suppose we have the below set of points with their respective classes as shown in the table. Answer the following question based on the table.

a3q7
Answer :- b

8. Suppose we have the below set of points with their respective classes as shown in the table. Answer the following question based on the table.

a3q8
Answer :- b

9. Suppose we have the below set of points with their respective classes as shown in the table. Answer the following question based on the table.

a3q9
Answer :- c

10. Which one of the following is a valid representation of hinge loss (of margin = 1) for a two-class problem?

y = class label (+1 or -1).
p = predicted (not normalized to denote any probability) value for a class.


a. L(y, p) = max(0, 1 – yp)
b. L(y, p) = min(0, 1 – yp)
c. L(y, p) = max(0, 1 + yp)
d. None of the above

Answer :- a. L(y, p) = max(0, 1 - yp)
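
For reference, a one-line implementation of the margin-1 hinge loss (a sketch):

    def hinge_loss(y, p):
        # y in {+1, -1}, p is the raw (unnormalized) score
        return max(0.0, 1.0 - y * p)

    print(hinge_loss(+1, 2.5), hinge_loss(+1, 0.3), hinge_loss(-1, 0.3))   # 0.0 0.7 1.3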

NPTEL Deep Learning Week 2 Assignment Answers 2023

1. Choose the correct option regarding discriminant functions g(x) for multiclass classification (x is the feature vector to be classified).
Statement i: The risk value R(αi|x) in the Bayes minimum risk classifier can be used as a discriminant function.
Statement ii: The negative of the risk value R(αi|x) in the Bayes minimum risk classifier can be used as a discriminant function.
Statement iii: The a posteriori probability P(ωi|x) in the Bayes minimum error classifier can be used as a discriminant function.
Statement iv: The negative of the a posteriori probability P(ωi|x) in the Bayes minimum error classifier can be used as a discriminant function.

a. Only Statement i is true
b. Both Statements ii and iii are true
c. Both Statements i and iv are true
d. Both Statements i and iv are true

Answer :- b. Both Statements ii and iii are true

2. Which of the following is true regarding functions of discriminant functions gi(x), i.e., f(gi(x))?

a. We can not use functions of discriminant functions f(g(x)), as discriminant functions for multiclass classification.
b. We can use functions of discriminant functions, f(g(x)), as discriminant functions for multiclass classification provided, they are constant functions i.e., f(g(x)) = C where C is a constant.
c. We can use functions of discriminant functions, f(g(x)), as discriminant functions for multiclass classification provided, they are monotonically increasing functions.
d. None of the above is true.

Answer :- c. We can use functions of discriminant functions, f(g(x)), as discriminant functions for multiclass classification provided, they are monotonically increasing functions. 

3. The class conditional probability density function for the class ωi, i.e., P(x|ωi), for a multivariate normal (or Gaussian) distribution (where x is a d-dimensional feature vector) is given by

image 3
Answer :- a.

4. There are some data points for two different classes given below.
Class 1 points: {(2, 6), (3, 4), (3, 8), (4, 6)}
Class 2 points: {(3, 0), (1, -2), (5, –2), (3, -4)}
Compute the mean vectors μ1 and μ2 for these two classes and choose the correct option.

a. μ1 = [2 6] and μ2 = [3 -1]
b. μ1 = [3 6] and μ2 = [2 -2]
c. μ1 = [3 6] and μ2 = [3 -2]
d. μ1 = [3 5] and μ2 = [2 -3]

Answer :- c
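
The class means are just coordinate-wise averages: μ1 = ((2+3+3+4)/4, (6+4+8+6)/4) = (3, 6) and μ2 = ((3+1+5+3)/4, (0-2-2-4)/4) = (3, -2). A quick check:

    class1 = [(2, 6), (3, 4), (3, 8), (4, 6)]
    class2 = [(3, 0), (1, -2), (5, -2), (3, -4)]

    def mean(points):
        n = len(points)
        return tuple(sum(p[i] for p in points) / n for i in range(2))

    print(mean(class1), mean(class2))   # (3.0, 6.0) (3.0, -2.0)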

5. There are some data points for two different classes given below.

Class 1 points: {(2, 6), (3, 4), (3, 8), (4, 6)}
Class 2 points: {(3, 0), (1, -2), (5, -2), (3, -4)}

Compute the covariance matrices Σ1 and Σ2 and choose the correct option.

image 4
Answer :- b

6. There are some data points for two different classes given below.
Class 1 points: {(2, 6), (3, 4), (3, 8), (4, 6)}
Class 2 points: {(3, 0), (1, -2), (5, -2), (3, -4)}

a2q6
Answer :- b

7. Let Σi represent the covariance matrix for the ith class. Assume that the classes have the same covariance matrix. Also assume that the features are statistically independent and have the same variance. Which of the following is true?

a. Σi = Σ (diagonal elements of Σ are zero)
b. Σi = Σ (diagonal elements of Σ are non-zero and different from each other; the rest of the elements are zero)
c. Σi = Σ (diagonal elements of Σ are non-zero and equal to each other; the rest of the elements are zero)
d. None of these

Answer :- c

8. The decision surface between two normally distributed classes ω1 and ω2 is shown in the figure. Can you comment on which of the following is true?

a2q8
Answer :- c

9.

a2q9
Answer :-  d

10. You are given some data points for two different class.
Class 1 points: {(11, 11), (13, 11), (8, 10), (9, 9), (7, 7), (7, 5), (15, 3)}
Class 2 points: {(7, 11), (15, 9), (15, 7), (13, 5), (14, 4), (9, 3), (11, 3)}
Assume that the points are samples from a normal distribution and a two-class Bayesian classifier is used to classify them. Also assume the prior probabilities of the classes are equal, i.e., P(ω1) = P(ω2).

Which of the following is true about the corresponding decision boundary used in the classifier? (Choose correct option regarding the given statements)
Statement i: Decision boundary passes through the midpoint of the line segment joining the means of two classes
Statement ii: Decision boundary will be orthogonal bisector of the line joining the means of two classes.

a. Only Statement i is true
b. Only Statement ii is true
c. Both Statements i and ii are true
d. None of the statements are true

Answer :- a

NPTEL Deep Learning Week 1 Assignment Answers 2023

1. Signature descriptor of an unknown shape is given in the figure, can you identify the unknown shape?

image
  • a. Circle
  • b. Square
  • c. Straight line
  • d. Rectangle
Answer:- d. Rectangle

2. Signature descriptor of an unknown shape is given in the figure. If d(θ) is measured in cm, what is the area of the unknown shape?

image
  • a. 120 sq. cm.
  • b. 144 sq. cm.
  • c. 240 sq. cm.
  • d. 100 sq. cm.
Answer:- c

3. To measure the smoothness, coarseness and regularity of a region, which transformation do we use to extract features?

  • Gabor Transformation
  • Wavelet Transformation
  • Both Gabor, and Wavelet Transformation.
  • None of the Above.
Answer:- Both Gabor, and Wavelet Transformation. 

4. Given the 5 x 5 image I (fig 1), we can compute the gray co-occurrence matrix C (fig 2) by specifying the displacement vector d = (dx, dy). Let the position operator be specified as (1, 1), which has the interpretation: one pixel to the right and one pixel below. (Both the image and the partial gray co-occurrence is given in the figure 1, and 2 respectively. Blank values and ‘x’ value in gray co-occurrence matrix are unknown.)

image 1
image 2

What is the value of ‘x’?

  • a. 0
  • b. 1
  • c. 2
  • d. 3
Answer:- a

5. Given the 5 x 5 image I (fig 1), we can compute the gray co-occurrence matrix by specifying the displacement vector d = (dx, dy). Let the position operator be specified as (1, 1), which has the interpretation: one pixel to the right and one pixel below. What is the value of maximum probability descriptor?

  • a. 3/16
  • b. 1/4
  • c. 3/12
  • d. 1/3
Answer:- b

6. Which of the following is a region descriptor?

  • a. Polygonal Representation
  • b. Fourier descriptor
  • c. Signature
  • d. Intensity histogram.
Answer:- d. Intensity histogram.

7. We use gray co-occurrence matrix to extract which type of information?

  • a. Boundary
  • b. Texture
  • c. MFCC
  • d. Zero Crossing rate.
Answer:- b. Texture

8. A single card is drawn from a standard deck of playing cards. What is the probability that a heart or a 5 is drawn? (Hint: a standard deck of 52 cards has 4 suits, namely hearts, spades, diamonds and clubs)

  • a. 3/13
  • b. 4/13
  • c. 17/52
  • d. 19/52
Answer:- b. 4/13
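
By inclusion-exclusion: P(heart or 5) = 13/52 + 4/52 - 1/52 = 16/52 = 4/13. A quick check:

    from fractions import Fraction

    p = Fraction(13, 52) + Fraction(4, 52) - Fraction(1, 52)
    print(p)   # 4/13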

9. Which of the following is strictly true for a two-class Bayes minimum error classifier? (The two different classes are ω1 and ω2, and the input feature vector is x)

  • a. Choose ω1 if P(x|ω1) > P(x|ω2)
  • b. Choose ω1 if P(ω1) > P(ω2)
  • c. Choose ω2 if P(ω1|x) > P(ω2|x)
  • d. Choose ω1 if P(ω1|x) > P(ω2|x)

Answer:- d

10. Consider a two-class Bayes' Minimum Risk Classifier. The prior probabilities of classes ω1 and ω2 are P(ω1) = 0.2 and P(ω2) = 0.8 respectively. P(x|ω1) = 0.75, P(x|ω2) = 0.5, and the loss matrix values are

Answer:- d
