Commit 6a9e3ca9 authored by David Maxence

Updated data loading and fold-free training; ~85% accuracy on the test set (on 6000 samples)

parent d0414aa7
%% Cell type:code id:israeli-hometown tags:
``` python
import numpy as np
import os
import pandas as pd
from scipy.io import wavfile
import matplotlib.pyplot as plt
```
%% Cell type:code id:waiting-today tags:
``` python
# Load a clip from UrbanSound8K fold1; returns the sample rate and the raw samples
samplerate, data = wavfile.read('../UrbanSound8K/audio/fold1/7061-6-0-0.wav')
```
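`wavfile.read` returns the sample rate and a NumPy array whose dtype reflects the WAV encoding (int16 for 16-bit PCM). A minimal self-contained round-trip, using a synthetic tone instead of the UrbanSound8K file (which may not be available locally):

``` python
import os
import tempfile

import numpy as np
from scipy.io import wavfile

# Synthesize one second of a 440 Hz tone as 16-bit PCM
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

# Write it to a temporary WAV file and read it back
path = os.path.join(tempfile.mkdtemp(), 'tone.wav')
wavfile.write(path, sr, tone)
samplerate, data = wavfile.read(path)

# The sample rate and int16 samples survive the round trip unchanged
assert samplerate == sr
assert data.dtype == np.int16
```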
%% Cell type:code id:computational-europe tags:
``` python
import leaf_audio.frontend as frontend
leaf = frontend.Leaf()
melfbanks = frontend.MelFilterbanks()
tfbanks = frontend.TimeDomainFilterbanks()
sincnet = frontend.SincNet()
sincnet_plus = frontend.SincNetPlus()
```
%% Output
WARNING:absl:Lingvo does not support eager execution yet. Please disable eager execution with tf.compat.v1.disable_eager_execution() or proceed at your own risk.
%% Cell type:code id:disciplinary-brain tags:
``` python
type(data)
```
%% Output
numpy.ndarray
%% Cell type:code id:compound-diesel tags:
``` python
data = data.astype('float')
```
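`astype('float')` keeps the raw int16 amplitude range (roughly ±32768). Many learnable frontends assume waveforms scaled to [-1, 1]; a hypothetical normalization step (not what this notebook does) would look like:

``` python
import numpy as np

# Simulated 16-bit PCM samples spanning the full int16 range
pcm = np.array([-32768, 0, 16384, 32767], dtype=np.int16)

# Dividing by 32768 maps int16 PCM into [-1.0, 1.0)
waveform = pcm.astype(np.float32) / 32768.0
assert waveform.dtype == np.float32
assert waveform.min() == -1.0 and waveform.max() < 1.0
```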
%% Cell type:code id:sustained-default tags:
``` python
data.dtype
```
%% Output
dtype('float64')
%% Cell type:code id:developmental-lounge tags:
``` python
leaf_representation = leaf(data)
melfbanks_representation = melfbanks(data)
```
%% Output
WARNING:tensorflow:Layer leaf is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because its dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
WARNING:tensorflow:Layer mel_filterbanks_1 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because its dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
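The (99225, 1, 40) shapes in the outputs below suggest the 1-D waveform was interpreted as a batch of 99225 one-sample clips. Assuming the frontends expect a (batch, samples) float32 tensor, the usual fix is an explicit batch axis; a sketch with NumPy in place of `leaf_audio`:

``` python
import numpy as np

# Raw waveform as loaded from a WAV file (simulated here)
data = np.random.randn(99225).astype(np.float32)

# Add an explicit batch dimension instead of passing the 1-D array directly,
# so one clip becomes a batch of size 1 rather than 99225 single samples
batched = data[np.newaxis, :]
assert batched.shape == (1, 99225)
```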
%% Cell type:code id:fuzzy-helmet tags:
``` python
type(melfbanks_representation)
```
%% Output
tensorflow.python.framework.ops.EagerTensor
%% Cell type:code id:funny-emperor tags:
``` python
leaf_representation
# `.numpy` without parentheses returns the bound method, not the array
# (seen when `t` is displayed below); `.numpy()` would return the ndarray
t = melfbanks_representation.numpy
```
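The next output shows a bound method rather than an array, because `.numpy` was referenced without being called. The same pitfall, illustrated with a plain NumPy array:

``` python
import numpy as np

a = np.arange(4)

# Referencing a method without () yields the bound method object...
m = a.sum
assert callable(m)

# ...while calling it returns the actual result
assert m() == 6
```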
%% Cell type:code id:raising-stephen tags:
``` python
t
```
%% Output
<bound method _EagerTensorBase.numpy of <tf.Tensor: shape=(99225, 1, 40), dtype=float32, numpy=
array([[[-11.512925, -11.512925, -11.512925, ..., -11.512925,
-11.512925, -11.512925]],
[[-11.512925, -11.512925, -11.512925, ..., -11.512925,
-11.512925, -11.512925]],
[[-11.512925, -11.512925, -11.512925, ..., -11.512925,
-11.512925, -11.512925]],
...,
[[-11.512925, -11.512925, -11.512925, ..., -11.512925,
-11.512925, -11.512925]],
[[-11.512393, -11.512265, -11.512299, ..., -11.507828,
-11.507522, -11.507188]],
[[-11.512925, -11.512925, -11.512925, ..., -11.512925,
-11.512925, -11.512925]]], dtype=float32)>>
%% Cell type:code id:dressed-present tags:
``` python
leaf_representation
```
%% Output
<tf.Tensor: shape=(99225, 1, 40), dtype=float32, numpy=
array([[[0.20780897, 0.20780897, 0.20780897, ..., 0.20780897,
0.20780897, 0.20780897]],
[[0.20780897, 0.20780897, 0.20780897, ..., 0.20780897,
0.20780897, 0.20780897]],
[[0.20780897, 0.20780897, 0.20780897, ..., 0.20780897,
0.20780897, 0.20780897]],
...,
[[0.21767402, 0.2243576 , 0.2243576 , ..., 0.2582096 ,
0.26002145, 0.26086974]],
[[0.22921395, 0.23622298, 0.23621511, ..., 0.24876428,
0.2451222 , 0.23672509]],
[[0.20780897, 0.20780897, 0.20780897, ..., 0.20780897,
0.20780897, 0.20780897]]], dtype=float32)>
%% Cell type:code id:painful-location tags:
``` python
X = np.stack(leaf_representation.numpy())  # np.stack is redundant here: .numpy() already returns an ndarray
```
%% Cell type:code id:alike-enhancement tags:
``` python
X[1]
```
%% Output
array([[0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897,
0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897,
0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897,
0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897,
0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897,
0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897,
0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897,
0.20780897, 0.20780897, 0.20780897, 0.20780897, 0.20780897]],
dtype=float32)
%% Cell type:code id:terminal-tulsa tags:
``` python
X.shape
X_dim = (128,128,1)
```
%% Cell type:code id:associate-conservative tags:
``` python
X = X.reshape( X_dim)
```
%% Output
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-25-d4a67893f50d> in <module>
----> 1 X = X.reshape( X_dim)
ValueError: cannot reshape array of size 3969000 into shape (128,128)
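The reshape fails because the tensor holds 99225 × 1 × 40 = 3,969,000 values, which 128 × 128 = 16,384 does not divide. A sketch of checking the size first and letting NumPy infer one axis with -1, which keeps the 40 filterbank channels intact:

``` python
import numpy as np

X = np.zeros((99225, 1, 40), dtype=np.float32)

# A target shape is only valid if it accounts for every element
assert X.size == 99225 * 40          # 3,969,000 values
assert X.size % (128 * 128) != 0     # so (128, 128, 1) cannot work

# -1 lets NumPy infer the time axis from the remaining dimensions
X2 = X.reshape(-1, 40)
assert X2.shape == (99225, 40)
```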
%% Cell type:code id:taken-kentucky tags:
``` python
```
model_checkpoint_path: "."
all_model_checkpoint_paths: "."