
Question: NVMe on SW RAID?

24.5.2015 18:11 alkoholik | score: 40 | blog: Alkoholik
NVMe on SW RAID?
Read: 392×
This question belongs half here and half in the database advice forum.
I'm preparing to move an Oracle database from an EMC SAN to internal NVMe disks, and I'm wondering whether to present them to ASM as two disks with normal redundancy, or to build a SW mirror on top of them.
I'm looking for pros and cons. At the moment I have a disk group with external redundancy, and a SW mirror would simply let me migrate the data while the database keeps running.
Can anyone think of more substantial arguments for one solution or the other?
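For illustration, the two options could be set up roughly as follows; this is only a minimal sketch, and the device names /dev/nvme0n1 and /dev/nvme1n1, the disk group name DATA and the failgroup names are placeholders, not taken from the actual setup:

# Option A: mirror in the OS with Linux MD RAID1 and present the resulting
# device to ASM as a single disk in an external-redundancy disk group
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Option B: hand ASM both raw NVMe devices and let it mirror them itself
# in a normal-redundancy disk group, each disk in its own failgroup
sqlplus / as sysasm <<'SQL'
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/nvme0n1'
  FAILGROUP fg2 DISK '/dev/nvme1n1';
SQL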

Replies

25.5.2015 12:45 NN
Re: NVMe on SW RAID?
Maybe I'll come across as an idiot, but from the wiki and the discussion I've concluded that external = nothing (stripe, 0) and normal/high = RAID 1, all of it done by Oracle software. And the question is, since you're after optimal performance on new hardware, what hardware is it?
25.5.2015 13:22 alkoholik | score: 40 | blog: Alkoholik
Re: NVMe on SW RAID?
What I wanted to know is whether it's better to run "RAID1" as implemented by Oracle or by Linux SW RAID.
Anyway, a former colleague who now works on their support replied:
<him> look, I don't think I've ever seen an ASM that didn't use external redundancy
that Oracle redundancy strikes me as kind of... well... why use it when the OS can do it without crapping out...
<me> you obviously have boundless trust in your own technologies
<him> that's exactly the point, I do :-D
besides, I don't see why I'd recommend something the vendor itself doesn't dare to use
12.6.2015 16:44 alkoholik | score: 40 | blog: Alkoholik
Re: NVMe on SW RAID?
In case anyone is interested..
ASM on a SW RAID built from NVMe disks under OL7 ends with:
[71650.216149] BUG: unable to handle kernel NULL pointer dereference at 0000000000000088
[71650.216177] IP: [<ffffffffa021b02f>] raid1_mergeable_bvec+0x2f/0xf0 [raid1]
[71650.216202] PGD 8514c7067 PUD 85086e067 PMD 0 
[71650.216226] Oops: 0000 [#1] SMP 
[71650.216241] Modules linked in: qla2xxx scsi_transport_fc scsi_tgt iptable_filter ip_tables oracleasm bonding coretemp mperf freq_table intel_powerclamp kvm_intel kvm crc32c_intel ghash_clmulni_intel cryptd microcode pcspkr ioatdma shpchp wmi ipmi_devintf ipmi_si ipmi_msghandler acpi_pad ext4 mbcache jbd2 raid1 sd_mod crc_t10dif ixgbe ahci xhci_hcd ptp libahci pps_core nvme dca hwmon ipv6 autofs4 [last unloaded: scsi_tgt]
[71650.216479] CPU 10 
[71650.216487] Pid: 6686, comm: oracle Tainted: G        W    3.8.13-68.3.1.el7uek.x86_64 #2 Supermicro Super Server/X10DRU-i+
[71650.216513] RIP: 0010:[<ffffffffa021b02f>]  [<ffffffffa021b02f>] raid1_mergeable_bvec+0x2f/0xf0 [raid1]
[71650.216540] RSP: 0018:ffff880417c79ae0  EFLAGS: 00010282
[71650.216552] RAX: ffff880853ab6800 RBX: ffff8808512f1180 RCX: 0000000000000200
[71650.216567] RDX: 0000000000000000 RSI: ffff880417c79b38 RDI: 0000000000000000
[71650.216581] RBP: ffff880417c79b18 R08: 0000000000000e00 R09: 0000000000000100
[71650.216598] R10: 0000000000000002 R11: 0000000000000000 R12: ffff8808512f1208
[71650.216892] R13: ffff880417c79b38 R14: 0000000000000200 R15: 0000000000000200
[71650.217185] FS:  00007fbd53ada740(0000) GS:ffff88087fd40000(0000) knlGS:0000000000000000
[71650.217479] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[71650.217768] CR2: 0000000000000088 CR3: 000000044d4e8000 CR4: 00000000001407e0
[71650.218060] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[71650.218352] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[71650.218643] Process oracle (pid: 6686, threadinfo ffff880417c78000, task ffff88043b616400)
[71650.218936] Stack:
[71650.219215]  ffff880417c79b54 ffff88044c5cedc0 ffff8808512f1180 ffff8808512f1208
[71650.219521]  ffff88044ec11038 0000000000000000 0000000000000200 ffff880417c79b80
[71650.219825]  ffffffff811b8b74 00000000000000d0 0000000000000000 0000000000000000
[71650.220131] Call Trace:
[71650.220421]  [<ffffffff811b8b74>] __bio_add_page+0xf4/0x250
[71650.220712]  [<ffffffff811ba306>] bio_map_user_iov+0x226/0x360
[71650.221002]  [<ffffffff811ba465>] bio_map_user+0x25/0x30
[71650.221294]  [<ffffffffa0d320a8>] asm_submit_io+0x5c8/0xaf0 [oracleasm]
[71650.221587]  [<ffffffff8127750e>] ? radix_tree_lookup_slot+0xe/0x10
[71650.221881]  [<ffffffff8112503e>] ? find_get_page+0x1e/0xa0
[71650.222172]  [<ffffffffa0d329d2>] asm_submit_io_native+0x92/0x1b0 [oracleasm]
[71650.222467]  [<ffffffff81045929>] ? default_spin_lock_flags+0x9/0x10
[71650.222761]  [<ffffffffa0d3502f>] asm_do_io+0x5cf/0xad0 [oracleasm]
[71650.223054]  [<ffffffffa0d2f270>] ? asmfs_put_super+0x20/0x20 [oracleasm]
[71650.223347]  [<ffffffffa0d356c1>] asmfs_svc_io64+0x191/0x290 [oracleasm]
[71650.223640]  [<ffffffffa0d35aca>] asmfs_file_read+0x7a/0x130 [oracleasm]
[71650.223937]  [<ffffffff81183fe3>] vfs_read+0xa3/0x180
[71650.224228]  [<ffffffff81184289>] sys_read+0x49/0xa0
[71650.224519]  [<ffffffff8157d099>] system_call_fastpath+0x16/0x1b
[71650.224808] Code: 00 55 48 89 e5 41 57 41 56 41 55 49 89 f5 41 54 49 89 d4 53 48 83 ec 10 48 8b 16 48 8b 87 e8 03 00 00 48 8b 7e 08 45 8b 74 24 08 <48> 8b 92 88 00 00 00 4c 8b 38 8b 80 20 02 00 00 48 89 7d d0 48 
[71650.225918] RIP  [<ffffffffa021b02f>] raid1_mergeable_bvec+0x2f/0xf0 [raid1]
[71650.226215]  RSP <ffff880417c79ae0>
[71650.226500] CR2: 0000000000000088
So I'm going with an ASM disk group with external redundancy.
Interestingly, on a similar server a SW RAID built from ordinary SATA SSDs runs without any problems.
17.6.2015 15:13 alkoholik | score: 40 | blog: Alkoholik
Re: NVMe on SW RAID?
OK, the correct solution is to set +ASM.asm_diskstring='/dev/oracleasm/disks/*' in init+ASM.ora, so that the ASM instance stops scanning the disks itself.
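For completeness, a minimal sketch of applying that setting; the init+ASM.ora line is the one from the post above, while the sqlplus variant for an spfile-based ASM installation is an assumption, not something mentioned in the thread:

# init+ASM.ora (as described above): limit ASM disk discovery to the
# ASMLib-managed device names
# +ASM.asm_diskstring='/dev/oracleasm/disks/*'

# Equivalent when the ASM instance runs from an spfile (assumed)
sqlplus / as sysasm <<'SQL'
ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/disks/*' SCOPE=BOTH;
SQL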


