Help me, binary segmentation acc error! #2628
Comments
Would you like to provide the full config for your dataset settings?
```python
# dataset settings
dataset_type = 'ManipulationDataset'  # change
data_root = '/home/featurize/data/manipulation'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(1280, 640), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2560, 640),
        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=6,
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='images/training',
        ann_dir='annotations/training',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='images/validation',
        ann_dir='annotations/validation',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='images/validation',
        ann_dir='annotations/validation',
        pipeline=test_pipeline))
optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0005)
```
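Note that the dataset config alone does not determine the class count: for a two-class (foreground/background) setup, the model's decode head must also be set to `num_classes=2` so it matches the 0/1 label values. A sketch of the model-side fragment (the head structure and loss settings here are assumptions, not taken from the author's actual config):

```python
# Model-side fragment that must agree with binary (0/1) labels;
# the surrounding head/backbone definitions are placeholders.
model = dict(
    decode_head=dict(
        num_classes=2,  # background + foreground
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=dict(num_classes=2))
```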
Hi @xiaoaxiaoxiaocao,
@xiexinch, here are the full config and some example images:
Did you use the annotations in data_process for training? It seems that all the annotation images are pure black. |
Yes, I used the data from data_process for training. I followed the DRIVE dataset: dividing the annotation image (data_ori) values by 128 is equivalent to '1 if value >= 128 else 0'.
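For reference, the binarization described above can be sketched as follows (a minimal sketch assuming the masks load as `uint8` arrays; `binarize_mask` is a hypothetical helper name, not from the author's script):

```python
import numpy as np

def binarize_mask(mask: np.ndarray) -> np.ndarray:
    """Map 8-bit grayscale mask values to {0, 1}.

    For uint8 input, integer division by 128 gives exactly the same
    result as `1 if value >= 128 else 0`.
    """
    return (mask // 128).astype(np.uint8)

demo = np.array([0, 64, 127, 128, 200, 255], dtype=np.uint8)
print(binarize_mask(demo).tolist())  # [0, 0, 0, 1, 1, 1]
```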
What do you mean by "divided by 128 is equivalent to '1 if value >= 128 else 0'"? Would you mind providing the complete preprocessing code you used?
Preprocessing code: `for i, file in enumerate(files):`
@xiexinch @csatsurnh If I do not do the above data preprocessing, the following error is reported:

```
2023-03-05 10:38:24,533 - mmseg - INFO - workflow: [('train', 1)], max: 80000 iters
import torch
ConvolutionParams
terminate called after throwing an instance of 'c10::CUDAError'
Aborted (core dumped)
```
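A `c10::CUDAError` abort during training is often a device-side assert from the loss, triggered when a ground-truth pixel value is `>= num_classes` and is not the ignore index (255) — which is exactly what happens if raw 0/255 masks are fed in without binarization. A quick sanity check on a loaded mask (a sketch; `check_labels` is a hypothetical helper, not part of mmsegmentation):

```python
import numpy as np

def check_labels(mask: np.ndarray, num_classes: int, ignore_index: int = 255):
    """Raise if the mask contains values a CUDA-side loss would assert on."""
    values = np.unique(mask)
    bad = [int(v) for v in values if v >= num_classes and v != ignore_index]
    if bad:
        raise ValueError(f'labels outside [0, {num_classes}): {bad}')
    return values

# A mask that was NOT binarized (raw grayscale values) fails the check:
raw = np.array([[0, 255, 200]], dtype=np.uint8)
try:
    check_labels(raw, num_classes=2)
except ValueError as e:
    print('invalid mask:', e)
```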
Hi @xiaoaxiaoxiaocao, |
Why? I followed the preprocessing of the DRIVE dataset, and its preprocessing was done this way.
We don't know whether your data is the same as DRIVE's, so there is no guarantee that DRIVE's preprocessing will work on your dataset as well.
Closing the issue, as there is no activity for a while. |
My custom dataset (modeled after the DRIVE dataset) has two categories: foreground and background. Dividing the annotation image values by 128 is equivalent to '1 if value >= 128 else 0'. The training results are as follows; I don't know how to improve them. Help me, thanks!
My config: