[Bug] Raid shows hostname #1039
Comments
Unfortunately, I don't think this is a bug; this is just what your system reports. But there should definitely be a flag to change the RAID name if it is reported incorrectly. I will make sure to add something for that.
That would be great, thanks.
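For reference, a minimal sketch of what such an override could look like, assuming a hypothetical DASHDOT_OVERRIDE_RAID_NAME environment variable (not an existing dashdot option at the time of this issue):

// Hypothetical: DASHDOT_OVERRIDE_RAID_NAME is not an existing dashdot option;
// this only illustrates how a user-supplied label override could work.
type RaidDisplay = { label: string; device: string };

const applyRaidNameOverride = (raid: RaidDisplay): RaidDisplay => {
  const override = process.env.DASHDOT_OVERRIDE_RAID_NAME;
  // Prefer the user-supplied name over whatever the system reports
  // (e.g. a label derived from the hostname).
  return override ? { ...raid, label: override } : raid;
};

// Example: with the env var set to 'RAID 0', the dashboard would show that
// instead of '47-ds-u28:1'.
console.log(applyRaidNameOverride({ label: '47-ds-u28:1', device: '/dev/md1' }));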
Here is the output. It does show Raid 1 and Raid 0, but I am thinking it's picking up the name from the LVM partition, so an override would be the best solution. Output:

const disks = [
{
device: '/dev/nvme0n1',
type: 'NVMe',
name: 'INTEL SSDPE2MX800G4J 118000562',
vendor: 'INTEL',
size: 800166076416,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: 'CVPD736600XR8005',
interfaceType: 'PCIe',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/nvme1n1',
type: 'NVMe',
name: 'INTEL SSDPE2MX800G4M 118000178',
vendor: 'INTEL',
size: 800166076416,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: 'CVPD7096002A800U',
interfaceType: 'PCIe',
smartStatus: 'unknown',
temperature: null
}
]
const sizes = [
{
fs: '/dev/mapper/47--ds--u28--vg-root',
type: 'ext4',
size: 1566657900544,
used: 779730161664,
available: 707270356992,
use: 52.44,
mount: '/',
rw: true
},
{
fs: '/dev/md0',
type: 'ext4',
size: 987009024,
used: 98508800,
available: 820604928,
use: 10.72,
mount: '/boot',
rw: true
},
{
fs: '/dev/nvme0n1p2',
type: 'vfat',
size: 549412864,
used: 6115328,
available: 543297536,
use: 1.11,
mount: '/boot/efi',
rw: true
}
]
/bin/sh: 1: mdadm: not found
const blocks = [
{
name: 'nvme0n1',
type: 'disk',
fsType: '',
mount: '',
size: 800166076416,
physical: 'SSD',
uuid: '',
label: '',
model: 'INTEL SSDPE2MX800G4J 118000562',
serial: 'CVPD736600XR8005',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'nvme1n1',
type: 'disk',
fsType: '',
mount: '',
size: 800166076416,
physical: 'SSD',
uuid: '',
label: '',
model: 'INTEL SSDPE2MX800G4M 118000178',
serial: 'CVPD7096002A800U',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme1n1'
},
{
name: 'loop0',
type: 'loop',
fsType: '',
mount: '/snap/certbot/3566',
size: 47165440,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop1',
type: 'loop',
fsType: '',
mount: '/snap/core20/2105',
size: 67014656,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop2',
type: 'loop',
fsType: '',
mount: '/snap/certbot-dns-cloudflare/3182',
size: 9719808,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop3',
type: 'loop',
fsType: '',
mount: '/snap/snapd/20671',
size: 42393600,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: '47--ds--u28--vg-root',
type: 'lvm',
fsType: 'ext4',
mount: '/',
size: 1592812109824,
physical: '',
uuid: '9a1f347f-60d7-4aef-af5a-1aa815cfbed2',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: '47--ds--u28--vg-swap',
type: 'lvm',
fsType: 'swap',
mount: '[SWAP]',
size: 4093640704,
physical: '',
uuid: 'f8264ab0-7db0-42c8-84d9-29c44990231a',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'nvme0n1p1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1023410176,
physical: '',
uuid: '6f484665-b26f-15a5-6812-0557164bbd88',
label: '47-ds-u28:0',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'nvme0n1p2',
type: 'part',
fsType: 'vfat',
mount: '/boot/efi',
size: 550502400,
physical: '',
uuid: '2E1A-3AA0',
label: '',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'nvme0n1p3',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 798590238720,
physical: '',
uuid: '4a512e08-bf38-bc95-af81-ef51172e6375',
label: '47-ds-u28:1',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'nvme1n1p1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1023410176,
physical: '',
uuid: '6f484665-b26f-15a5-6812-0557164bbd88',
label: '47-ds-u28:0',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme1n1'
},
{
name: 'nvme1n1p2',
type: 'part',
fsType: 'vfat',
mount: '',
size: 550502400,
physical: '',
uuid: '2E1A-E4DA',
label: '',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme1n1'
},
{
name: 'nvme1n1p3',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 798590238720,
physical: '',
uuid: '4a512e08-bf38-bc95-af81-ef51172e6375',
label: '47-ds-u28:1',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme1n1'
},
{
name: 'md1',
type: 'raid0',
fsType: 'LVM2_member',
mount: '',
size: 1596909944832,
physical: '',
uuid: '73iiVc-LAl9-4e3v-Lr3h-1Zv1-wOs8-S6Rpak',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'md0',
type: 'raid1',
fsType: 'ext4',
mount: '/boot',
size: 1022361600,
physical: '',
uuid: 'd94564cb-8af1-4d8e-a1a4-f6f9003fc53d',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
}
]
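For what it's worth, the labels on the linux_raid_member partitions above ('47-ds-u28:0', '47-ds-u28:1') follow mdadm's homehost:array naming convention, which is probably where the hostname in the dashboard comes from. A minimal sketch of stripping that prefix, assuming the label always has the host:name shape seen here:

// Sketch only: assumes the mdadm-style "<homehost>:<array>" label shape
// seen in the output above (e.g. '47-ds-u28:0').
const stripHomehost = (label: string): string => {
  const idx = label.lastIndexOf(':');
  // Keep just the array part ('0', '1') and drop the hostname prefix.
  return idx === -1 ? label : label.slice(idx + 1);
};

console.log(stripHomehost('47-ds-u28:0')); // '0'
console.log(stripHomehost('47-ds-u28:1')); // '1'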
Reinstalled Debian and now the RAID is showing up correctly.
Sorry, didn't mean to close it. It would still be nice to be able to custom-edit this. I would like to add RAID 0 and RAID 5 to the name, roughly as sketched below.
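One possible way to get there, assuming the block list keeps reporting the md devices with a type of 'raid0', 'raid1', etc. as in the output above, would be to derive the display name from that field instead of the member partition label:

// Sketch: build a friendly name from the lsblk-style `type` field
// ('raid0', 'raid1', 'raid5', ...) rather than the member partition label.
const raidTypeToName = (type: string): string => {
  const match = type.match(/^raid(\d+)$/);
  return match ? `RAID ${match[1]}` : type;
};

console.log(raidTypeToName('raid0')); // 'RAID 0'
console.log(raidTypeToName('raid1')); // 'RAID 1'
console.log(raidTypeToName('raid5')); // 'RAID 5'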
Description of the bug
In the disk section, it shows "Raid" followed by my hostname. I do not think that was intentional, so it is likely a bug. I don't see a way to override it to say "Raid 0".
This instance is built and run from the source code.
How to reproduce
No response
Relevant log output
No response
Info output of dashdot cli
No response
What browsers are you seeing the problem on?
Chrome
Where is your instance running?
Linux Server
Additional context
No response