The nineties game Brány Skeldalu has been ported and is now available on Steam. The author has also published an excellent blog post about porting it to modern systems and platforms, including Linux.
People do strange things, such as running Linux in Excel. The RISC-V emulator mini-rv32ima, built as a DLL and called from a VBA (Visual Basic for Applications) macro, is used to do it.
Revolut will offer an unlimited mobile tariff for 12.50 euros (312 CZK). It is currently launching in the United Kingdom and Germany.
Amazon, the company of billionaire Jeff Bezos, has launched the first batch of satellites of its Kuiper project into orbit. Kuiper is to provide high-speed internet access from space across the whole world and compete with the currently dominant Starlink of the planet's richest man, Elon Musk.
With its latest update, the GPT-4o model began to flatter users excessively. OpenAI has therefore rolled it back to the previous version.
Google Chrome 136 has been declared stable. The latest stable version, 136.0.7103.59, brings a number of new features for users and developers alike. A detailed overview can be found in the release notes. Eight security bugs have been fixed, and the developer tools have also been improved.
Homebrew (Wikipedia), the package manager for macOS and, since version 2.0.0, also for Linux, has been released in version 4.5.0. The list of packages can be browsed on the Homebrew Formulae page. Various statistics are also available.
Mozilla Firefox 138.0 has been released. An overview of the new features is in the release notes and the release notes for developers. Security bugs have also been fixed. The new Firefox 138 is already available on Flathub and Snapcraft as well.
The sixteenth edition of the jOpenSpace un-conference takes place on 3–5 October 2025 at the Hotel Antoň in Telč. To attend, you need to fill in the registration form. "Un-conference" does not mean that the organizers cannot be bothered to prepare a programme; on the contrary, it gives all the invitees room to put the programme together themselves from the most interesting things they have been working on or that have caught their attention lately. The content, created by all the participants, consists of ten-minute
The following post is a student work created as part of the course Advanced Topics of Linux Administration. The course is taught at the Faculty of Informatics of Masaryk University in cooperation with Red Hat Czech. The language of instruction is English, which is why the following post is in English as well.
Lustre is a new-generation object-based distributed file system. It is heavily used in the world's supercomputers (fully half of the top 30 supercomputers use Lustre for their file storage), mainly for its enormous speed and scalability. The name Lustre is a blend of the words Linux and cluster. Lustre is known to sustain thousands of nodes and petabytes of disk space while maintaining speed, security and high availability. Oh, and did I forget to say it's developed under the GNU GPL? ;)
Lustre started with Cluster File Systems, Inc., a company founded by Dr. Peter Braam in 2001. Cluster File Systems, Inc. held offices in the United States of America, Canada and even China. Its clients included such famous companies as Hewlett-Packard and Cray, and famous supercomputer laboratories such as Oak Ridge National Laboratory and Los Alamos National Laboratory. On September 12, 2007, Sun Microsystems, Inc. and Cluster File Systems, Inc. signed an acquisition agreement (Sun Microsystems was to acquire Cluster File Systems). The acquisition was completed on October 1, 2007.
Lustre consists of three main units:
- a metadata server (MDS), which stores the file system's namespace metadata on one or more metadata targets (MDTs),
- one or more object storage servers (OSSes), which store the file data on object storage targets (OSTs),
- the clients, which access and use the data.
Theoretically it is possible to have all three units on one machine, but who would want something like that? ;) Typically the units are spread across different nodes, with two to four OSTs per OSS; and all the nodes should be dedicated. A Lustre system can run over various network types, including TCP/IP, InfiniBand and other proprietary networks. Lustre can also use remote direct memory access (RDMA) transfers to improve bandwidth and reduce CPU usage.
The OSSes' storage is usually partitioned and organized by the Logical Volume Manager and/or backed by RAID. The storage is formatted as a Lustre file system, which is used by the clients. Internally, Lustre still uses ext3 (with plans to use ZFS in the future) to store metadata.
Access to a file from a Lustre client is handled by the Lustre system in these steps:
1. the client asks the MDS for the file's metadata, including the pointers to the objects that hold its data;
2. using those pointers, the client performs its reads and writes directly against the OSSes that store the objects.
Note that the Lustre clients never modify the underlying storage directly by themselves, but delegate these tasks to the OSSes. This ensures scalability and also improves security and reliability.
On a Linux client, Lustre can be used either as a user-space library or as a kernel module. In the beginning there was only the kernel module, and a typical Lustre installation would use it to mount the Lustre file system like any other file system. Client applications see a single unified file system (even though thousands of nodes may comprise it).
Since 2008, a Lustre system may also use the user-space library liblustre, which not only offers the same features as the kernel module, but also enables a node to see the Lustre file system even though the node is not configured as a regular Lustre client. Liblustre allows data to be read or modified directly on the OSSes; this approach does not require the data copy to go through the kernel, providing low latency and high bandwidth.
Normally, in a Linux file system an inode contains all the basic information about a file (e.g. where its data is stored). The Lustre file system uses inodes as well, but inodes on the MDSes point to the objects associated with the file, not to data blocks. These objects are backed by individual files on the OSSes. When a Lustre client opens a file, the file open operation transfers a set of object pointers from the MDS to the client, so the client can then interact directly with the OSS nodes where the objects are stored.
There is a special case in which only one object is associated with an MDS inode; in that case the object contains all the file's data. When more objects are associated with a file, the data is striped across the objects (this is similar to RAID 0, and in Lustre a file can be striped across up to 160 objects). Striping brings high performance, and when it is used the maximum file size is not limited by the size of a single target but aggregates over the number of OSTs.
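The RAID-0-style mapping described above can be illustrated with a bit of shell arithmetic. This is only a sketch of the striping idea, not Lustre code; the stripe size and stripe count are made-up example values:

```shell
# Illustrative only: how striping maps a byte offset to one of a file's
# objects. stripe_size and stripe_count are hypothetical example values.
stripe_size=$((1024 * 1024))   # 1 MiB per stripe chunk
stripe_count=4                 # file striped across 4 objects (OSTs)

# For a given byte offset, which object holds that byte?
object_for_offset() {
  local offset=$1
  echo $(( (offset / stripe_size) % stripe_count ))
}

object_for_offset 0                       # first chunk  -> object 0
object_for_offset $((3 * 1024 * 1024))    # fourth chunk -> object 3
object_for_offset $((4 * 1024 * 1024))    # fifth chunk wraps -> object 0
```

Consecutive chunks land on consecutive objects and wrap around, which is why both the bandwidth and the maximum file size grow with the number of OSTs.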
The network between the Lustre nodes is implemented over the Lustre Networking (aka LNET) API that provides the infrastructure for the Lustre file system.
The Lustre Networking API supports many network types, including TCP/IP and InfiniBand, and provides features such as transparent recovery with failover servers.
The Lustre Networking API provides end-to-end bandwidth of over 100 MB/s on 1 Gigabit Ethernet links, up to 1.5 GB/s on InfiniBand, and over 1 GB/s across 10 Gigabit Ethernet links.
Lustre's high-availability features include failover and recovery mechanisms that make server failures, reboots and shutdowns transparent. Version interoperability between minor Lustre versions allows a system administrator to shut down Lustre on one server, upgrade or repair it, and then restart it.
While current network file systems like the Global File System need the same storage to be delivered to all participating server nodes, Lustre allows one to aggregate the storage available to the servers, with a high gain in performance and availability. On the other hand, without sufficient failover mechanisms, such as redundant power supplies or RAID, the outcome of a failure could be catastrophic.
Administering a Lustre system is really simple. The first thing you need to do is install the Lustre packages. These can be downloaded from Sun Microsystems' website, or, in the case of the Debian GNU/Linux distribution, they are part of the official repositories.
For this example I am using four machines: an MDS (named mds.example.com), two OSSes (named oss1.example.com and oss2.example.com) and a Lustre client (named client.example.com). For the connections I am using Ethernet.
On the MDS and the OSSes we need to install the Lustre server packages.
On the Lustre client we need to install the Lustre client packages.
Add this line to /etc/modprobe.conf on all the machines:
options lnet networks=tcp
That tells Lustre to use the TCP/IP networking.
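LNET can also be bound to a specific interface, or combined with other network types such as InfiniBand. A hypothetical variant of the /etc/modprobe.conf line (the interface names eth0 and ib0 are assumptions for illustration, not part of this example setup):

```
options lnet networks=tcp0(eth0),o2ib0(ib0)
```

With such a line, TCP traffic would go through eth0, while the o2ib0 network would use the InfiniBand interface ib0.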
On mds.example.com, create the MDT device (I use a RAID device for it):
mkfs -t lustre --fsname=MyFirstLustreFS --mgs --mdt /dev/md0
And now mount it, so that it can be used:
mount -t lustre /dev/md0 /mnt/mdt
On oss1.example.com, create, let's say, two OST devices (again backed by RAID):
mkfs -t lustre --ost --fsname=MyFirstLustreFS --mgsnode=mds.example.com@tcp0 /dev/md0
mkfs -t lustre --ost --fsname=MyFirstLustreFS --mgsnode=mds.example.com@tcp0 /dev/md1
And mount it, so that it can be used:
mount -t lustre /dev/md0 /mnt/ost1
mount -t lustre /dev/md1 /mnt/ost2
And on oss2.example.com, create, let's say, one OST device (again backed by RAID):
mkfs -t lustre --ost --fsname=MyFirstLustreFS --mgsnode=mds.example.com@tcp0 /dev/md0
And mount it, so that it can be used:
mount -t lustre /dev/md0 /mnt/ost3
Finally, mount the Lustre file system on client.example.com (the MGS runs on the MDS node in this setup, and the mount point /mnt/lustre must exist):
mount -t lustre mds.example.com@tcp0:/MyFirstLustreFS /mnt/lustre
If you want to have more clients or OSTs, just use the same procedures again.
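If the client should mount the file system automatically at boot, an /etc/fstab entry can be used. This is a sketch, assuming a mount point of /mnt/lustre (an example path); the `_netdev` option delays the mount until the network is up:

```
mds.example.com@tcp0:/MyFirstLustreFS  /mnt/lustre  lustre  defaults,_netdev  0  0
```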
I thought an object file system allowed things like structuring the contents of a "file", with a process then able to read or write individual parts of the structure, search the parts of all files matching some criterion, and so on. (Something like what relational/object database systems can do today.)
Do Lustre's capabilities end at scattering a file's contents across multiple geographically distant blocks (objects), or does Lustre offer truly object-based services beyond the POSIX interface?