RedHat: RHSA-2018-2261:01 Moderate: Red Hat Ceph Storage 2.5 security, enhancement, and bug fix update

    Date: 26 Jul 2018
    Category: Red Hat
    Posted By: Anthony Pell
    An update for ceph is now available for Red Hat Ceph Storage 2.5 for Red Hat Enterprise Linux 7. Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA256
    
    =====================================================================
                       Red Hat Security Advisory
    
    Synopsis:          Moderate: Red Hat Ceph Storage 2.5 security, enhancement, and bug fix update
    Advisory ID:       RHSA-2018:2261-01
    Product:           Red Hat Ceph Storage
    Advisory URL:      https://access.redhat.com/errata/RHSA-2018:2261
    Issue date:        2018-07-26
    CVE Names:         CVE-2018-1128 CVE-2018-1129 CVE-2018-10861 
    =====================================================================
    
    1. Summary:
    
    An update for ceph is now available for Red Hat Ceph Storage 2.5 for Red
    Hat Enterprise Linux 7.
    
    Red Hat Product Security has rated this update as having a security impact
    of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
    gives a detailed severity rating, is available for each vulnerability from
    the CVE link(s) in the References section.
    
    2. Relevant releases/architectures:
    
    Red Hat Ceph Storage 2.5 Client-Tools - noarch, x86_64
    Red Hat Ceph Storage 2.5 ComputeNode-Tools - noarch, x86_64
    Red Hat Ceph Storage 2.5 MON - x86_64
    Red Hat Ceph Storage 2.5 OSD - x86_64
    Red Hat Ceph Storage 2.5 Server-Tools - noarch, x86_64
    Red Hat Ceph Storage 2.5 Workstation-Tools - noarch, x86_64
    
    3. Description:
    
    Red Hat Ceph Storage is a scalable, open, software-defined storage platform
    that combines the most stable version of the Ceph storage system with a
    Ceph management platform, deployment utilities, and support services.
    
    Security Fix(es):
    
    * ceph: cephx protocol is vulnerable to replay attack (CVE-2018-1128)
    
    * ceph: cephx uses weak signatures (CVE-2018-1129)
    
    * ceph: ceph-mon does not perform authorization on OSD pool ops
    (CVE-2018-10861)
    
    For more details about the security issue(s), including the impact, a CVSS
    score, and other related information, refer to the CVE page(s) listed in
    the References section.
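
    As a supplementary, hedged illustration (not part of the advisory text):
    the protocol fixes for the cephx flaws above ship in the updated packages,
    but cephx message signing can also be verified or enforced through the
    standard cephx options. The sketch below assumes the stock `ceph` CLI and
    uses "mon.ceph1" as a placeholder monitor daemon name.

        # Confirm on a monitor host that cephx message signing is enforced.
        ceph daemon mon.ceph1 config get cephx_require_signatures
        ceph daemon mon.ceph1 config get cephx_cluster_require_signatures
        ceph daemon mon.ceph1 config get cephx_service_require_signatures

        # Equivalent ceph.conf settings, if you choose to enforce signing
        # explicitly (general hardening; not mandated by this advisory):
        # [global]
        #     cephx require signatures = true
        #     cephx cluster require signatures = true
        #     cephx sign messages = true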
    
    Enhancement(s):
    
    * Ceph OSDs now log by default when they shut down due to disk operations
    timing out. (BZ#1568897)
    
    * The `radosgw-admin orphans find` command can inadvertently remove data
    objects that are still in use if it is followed by another operation, such
    as a `rados rm` command. Users are now warned before attempting to produce
    lists of potentially orphaned objects. (BZ#1573656) (See the usage sketch
    after this list.)
    
    * The `ceph-osdomap-tool` now has a `compact` command to perform offline
    compaction on an OSD's `omap` directory. (BZ#1574231) (See the sketch
    after this list.)
    
    * For the S3 and Swift protocols, an option to list buckets/containers in
    natural (partial) order has been added. Listing containers in sorted order
    is canonical in both protocols, but it is costly and not required by some
    client applications. The performance and workload cost of S3 and Swift
    bucket/container listings is reduced for sharded buckets/containers when
    the `allow_unordered` extension is used. (BZ#1595374) (See the request
    sketch after this list.)
    
    * An asynchronous mechanism for executing Ceph Object Gateway garbage
    collection using the `librados` APIs has been introduced. The original
    garbage collection mechanism serialized all processing and lagged behind
    applications under specific workloads. Garbage collection performance has
    been significantly improved and can be tuned to site-specific
    requirements. (BZ#1595383) (See the tuning sketch after this list.)
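
    Regarding the `radosgw-admin orphans find` note above, the usual scan
    workflow is sketched here as a hedged illustration; the pool name and job
    ID are placeholders, and this update only adds a warning prompt, so review
    the results carefully before removing anything.

        # "orphans find" only builds a candidate list; it does not delete data.
        radosgw-admin orphans find --pool=.rgw.buckets --job-id=job1

        # Review pending scan jobs, then clean up the scan's bookkeeping data
        # once the results have been inspected.
        radosgw-admin orphans list-jobs
        radosgw-admin orphans finish --job-id=job1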
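
    For the `ceph-osdomap-tool` compaction noted above, a minimal sketch
    follows. It assumes a FileStore OSD whose omap directory is
    /var/lib/ceph/osd/ceph-<id>/current/omap; the OSD ID 0 is a placeholder,
    and the daemon must be stopped before its omap directory is touched.

        # Offline omap compaction for OSD 0 (placeholder ID).
        systemctl stop ceph-osd@0
        ceph-osdomap-tool --omap-path /var/lib/ceph/osd/ceph-0/current/omap \
            --command compact
        systemctl start ceph-osd@0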
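
    The unordered-listing extension noted above is exposed as a query
    parameter on bucket (S3) and container (Swift) GET requests. The request
    shape is sketched below purely as an illustration; the endpoint and bucket
    name are placeholders, and authentication headers are omitted, so this
    exact call only succeeds against a bucket that allows anonymous reads.

        # Request an unordered (partial-order) listing from the Ceph Object
        # Gateway; "allow-unordered" is the S3-side spelling of the extension.
        curl "http://rgw.example.com/mybucket/?allow-unordered=true"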
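
    The garbage-collection improvement noted above can be observed and tuned
    through the long-standing RGW GC interfaces. The sketch below is
    illustrative only; it uses generic GC commands and tunables rather than
    any option introduced by this release, and the values are placeholders to
    adapt to your site.

        # Inspect the GC queue and trigger a processing pass.
        radosgw-admin gc list --include-all
        radosgw-admin gc process

        # Generic GC tunables in ceph.conf (placeholder section and values):
        # [client.rgw.gateway1]
        #     rgw gc obj min wait = 3600
        #     rgw gc processor period = 3600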
    
    Bug Fix(es):
    
    These updated ceph packages include numerous bug fixes. Space precludes
    documenting all of these changes in this advisory. Users are directed to
    the Red Hat Ceph Storage 2.5 Release Notes for information on the most
    significant bug fixes for this release:
    
    https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2.5/html/release_notes/bug_fixes
    
    4. Solution:
    
    For details on how to apply this update, which includes the changes
    described in this advisory, refer to:
    
    https://access.redhat.com/articles/11258
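
    As a hedged sketch of the general procedure described in the article
    above (not advisory text): on Red Hat Enterprise Linux 7 the erratum is
    applied with yum, after which the Ceph daemons on the node are restarted,
    typically one node at a time with monitors before OSDs. The systemd
    target names shown are the standard ones shipped with these packages.

        # Apply the erratum on a Ceph node.
        yum update --advisory=RHSA-2018:2261

        # Restart the updated daemons on this node, for example:
        systemctl restart ceph-mon.target     # on monitor nodes
        systemctl restart ceph-osd.target     # on OSD nodes, one node at a time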
    
    5. Bugs fixed (https://bugzilla.redhat.com/):
    
    1448084 - [rbd-mirror] : manually removing a replicated image might result in an empty image
    1515341 - [rbd] rbd du on empty pool does not return proper output
    1522881 - RGW Memory Leak
    1548071 - [CEE/SD][ceph-ansible][RHCS2]rolling-update.yml does not set noout,noscrub and nodeep-scrub flags
    1554963 - Pool create cmd's expected_num_objects is not properly interpreted
    1563825 - Ubuntu build errors because of "no such option: --use-wheel" in pip
    1568897 - [RFE][CEE/SD] change "osd_max_markdown_count" dout level from (10) -> (0) for RHCS 2.y version
    1574231 - RHCS 2.5.1 - Add compact operation to ceph-osdomap-tool
    1575866 - CVE-2018-1128 ceph: cephx protocol is vulnerable to replay attack
    1576057 - CVE-2018-1129 ceph: cephx uses weak signatures
    1581579 - After latest environment update all ... are in failed state
    1584218 - RGW listing versioned bucket objects always give 1 extra entry in the list in every 1000 entries
    1584763 - Some versioned objects don't sync when uploaded with 's3cmd sync'
    1584829 - objects in cache never refresh after rgw_cache_expiry_interval
    1593308 - CVE-2018-10861 ceph: ceph-mon does not perform authorization on OSD pool ops
    1595374 - [RFE] rgw: implement partial order bucket/container listing (perf)
    1595383 - [RFE] implement parallel async/mt garbage collection
    1595386 - RGW spinning at 100% CPU with no op traffic
    1599507 - [Continuous OSD memory usage growth in a HEALTH_OK cluster] RGW workload makes OSD memory explode
    
    6. Package List:
    
    Red Hat Ceph Storage 2.5 Client-Tools:
    
    Source:
    ceph-10.2.10-28.el7cp.src.rpm
    ceph-ansible-3.0.39-1.el7cp.src.rpm
    
    noarch:
    ceph-ansible-3.0.39-1.el7cp.noarch.rpm
    
    x86_64:
    ceph-base-10.2.10-28.el7cp.x86_64.rpm
    ceph-common-10.2.10-28.el7cp.x86_64.rpm
    ceph-debuginfo-10.2.10-28.el7cp.x86_64.rpm
    ceph-fuse-10.2.10-28.el7cp.x86_64.rpm
    ceph-mds-10.2.10-28.el7cp.x86_64.rpm
    ceph-radosgw-10.2.10-28.el7cp.x86_64.rpm
    ceph-selinux-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-devel-10.2.10-28.el7cp.x86_64.rpm
    librados2-10.2.10-28.el7cp.x86_64.rpm
    librados2-devel-10.2.10-28.el7cp.x86_64.rpm
    librbd1-10.2.10-28.el7cp.x86_64.rpm
    librbd1-devel-10.2.10-28.el7cp.x86_64.rpm
    librgw2-10.2.10-28.el7cp.x86_64.rpm
    librgw2-devel-10.2.10-28.el7cp.x86_64.rpm
    python-cephfs-10.2.10-28.el7cp.x86_64.rpm
    python-rados-10.2.10-28.el7cp.x86_64.rpm
    python-rbd-10.2.10-28.el7cp.x86_64.rpm
    rbd-mirror-10.2.10-28.el7cp.x86_64.rpm
    
    Red Hat Ceph Storage 2.5 ComputeNode-Tools:
    
    Source:
    ceph-10.2.10-28.el7cp.src.rpm
    ceph-ansible-3.0.39-1.el7cp.src.rpm
    
    noarch:
    ceph-ansible-3.0.39-1.el7cp.noarch.rpm
    
    x86_64:
    ceph-base-10.2.10-28.el7cp.x86_64.rpm
    ceph-common-10.2.10-28.el7cp.x86_64.rpm
    ceph-debuginfo-10.2.10-28.el7cp.x86_64.rpm
    ceph-fuse-10.2.10-28.el7cp.x86_64.rpm
    ceph-mds-10.2.10-28.el7cp.x86_64.rpm
    ceph-radosgw-10.2.10-28.el7cp.x86_64.rpm
    ceph-selinux-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-devel-10.2.10-28.el7cp.x86_64.rpm
    librados2-10.2.10-28.el7cp.x86_64.rpm
    librados2-devel-10.2.10-28.el7cp.x86_64.rpm
    librbd1-10.2.10-28.el7cp.x86_64.rpm
    librbd1-devel-10.2.10-28.el7cp.x86_64.rpm
    librgw2-10.2.10-28.el7cp.x86_64.rpm
    librgw2-devel-10.2.10-28.el7cp.x86_64.rpm
    python-cephfs-10.2.10-28.el7cp.x86_64.rpm
    python-rados-10.2.10-28.el7cp.x86_64.rpm
    python-rbd-10.2.10-28.el7cp.x86_64.rpm
    rbd-mirror-10.2.10-28.el7cp.x86_64.rpm
    
    Red Hat Ceph Storage 2.5 MON:
    
    Source:
    ceph-10.2.10-28.el7cp.src.rpm
    
    x86_64:
    ceph-base-10.2.10-28.el7cp.x86_64.rpm
    ceph-common-10.2.10-28.el7cp.x86_64.rpm
    ceph-debuginfo-10.2.10-28.el7cp.x86_64.rpm
    ceph-mon-10.2.10-28.el7cp.x86_64.rpm
    ceph-selinux-10.2.10-28.el7cp.x86_64.rpm
    ceph-test-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-devel-10.2.10-28.el7cp.x86_64.rpm
    librados2-10.2.10-28.el7cp.x86_64.rpm
    librados2-devel-10.2.10-28.el7cp.x86_64.rpm
    librbd1-10.2.10-28.el7cp.x86_64.rpm
    librbd1-devel-10.2.10-28.el7cp.x86_64.rpm
    librgw2-10.2.10-28.el7cp.x86_64.rpm
    librgw2-devel-10.2.10-28.el7cp.x86_64.rpm
    python-cephfs-10.2.10-28.el7cp.x86_64.rpm
    python-rados-10.2.10-28.el7cp.x86_64.rpm
    python-rbd-10.2.10-28.el7cp.x86_64.rpm
    
    Red Hat Ceph Storage 2.5 OSD:
    
    Source:
    ceph-10.2.10-28.el7cp.src.rpm
    
    x86_64:
    ceph-base-10.2.10-28.el7cp.x86_64.rpm
    ceph-common-10.2.10-28.el7cp.x86_64.rpm
    ceph-debuginfo-10.2.10-28.el7cp.x86_64.rpm
    ceph-osd-10.2.10-28.el7cp.x86_64.rpm
    ceph-selinux-10.2.10-28.el7cp.x86_64.rpm
    ceph-test-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-devel-10.2.10-28.el7cp.x86_64.rpm
    librados2-10.2.10-28.el7cp.x86_64.rpm
    librados2-devel-10.2.10-28.el7cp.x86_64.rpm
    librbd1-10.2.10-28.el7cp.x86_64.rpm
    librbd1-devel-10.2.10-28.el7cp.x86_64.rpm
    librgw2-10.2.10-28.el7cp.x86_64.rpm
    librgw2-devel-10.2.10-28.el7cp.x86_64.rpm
    python-cephfs-10.2.10-28.el7cp.x86_64.rpm
    python-rados-10.2.10-28.el7cp.x86_64.rpm
    python-rbd-10.2.10-28.el7cp.x86_64.rpm
    
    Red Hat Ceph Storage 2.5 Server-Tools:
    
    Source:
    ceph-10.2.10-28.el7cp.src.rpm
    ceph-ansible-3.0.39-1.el7cp.src.rpm
    
    noarch:
    ceph-ansible-3.0.39-1.el7cp.noarch.rpm
    
    x86_64:
    ceph-base-10.2.10-28.el7cp.x86_64.rpm
    ceph-common-10.2.10-28.el7cp.x86_64.rpm
    ceph-debuginfo-10.2.10-28.el7cp.x86_64.rpm
    ceph-fuse-10.2.10-28.el7cp.x86_64.rpm
    ceph-mds-10.2.10-28.el7cp.x86_64.rpm
    ceph-radosgw-10.2.10-28.el7cp.x86_64.rpm
    ceph-selinux-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-devel-10.2.10-28.el7cp.x86_64.rpm
    librados2-10.2.10-28.el7cp.x86_64.rpm
    librados2-devel-10.2.10-28.el7cp.x86_64.rpm
    librbd1-10.2.10-28.el7cp.x86_64.rpm
    librbd1-devel-10.2.10-28.el7cp.x86_64.rpm
    librgw2-10.2.10-28.el7cp.x86_64.rpm
    librgw2-devel-10.2.10-28.el7cp.x86_64.rpm
    python-cephfs-10.2.10-28.el7cp.x86_64.rpm
    python-rados-10.2.10-28.el7cp.x86_64.rpm
    python-rbd-10.2.10-28.el7cp.x86_64.rpm
    rbd-mirror-10.2.10-28.el7cp.x86_64.rpm
    
    Red Hat Ceph Storage 2.5 Workstation-Tools:
    
    Source:
    ceph-10.2.10-28.el7cp.src.rpm
    ceph-ansible-3.0.39-1.el7cp.src.rpm
    
    noarch:
    ceph-ansible-3.0.39-1.el7cp.noarch.rpm
    
    x86_64:
    ceph-base-10.2.10-28.el7cp.x86_64.rpm
    ceph-common-10.2.10-28.el7cp.x86_64.rpm
    ceph-debuginfo-10.2.10-28.el7cp.x86_64.rpm
    ceph-fuse-10.2.10-28.el7cp.x86_64.rpm
    ceph-mds-10.2.10-28.el7cp.x86_64.rpm
    ceph-radosgw-10.2.10-28.el7cp.x86_64.rpm
    ceph-selinux-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-10.2.10-28.el7cp.x86_64.rpm
    libcephfs1-devel-10.2.10-28.el7cp.x86_64.rpm
    librados2-10.2.10-28.el7cp.x86_64.rpm
    librados2-devel-10.2.10-28.el7cp.x86_64.rpm
    librbd1-10.2.10-28.el7cp.x86_64.rpm
    librbd1-devel-10.2.10-28.el7cp.x86_64.rpm
    librgw2-10.2.10-28.el7cp.x86_64.rpm
    librgw2-devel-10.2.10-28.el7cp.x86_64.rpm
    python-cephfs-10.2.10-28.el7cp.x86_64.rpm
    python-rados-10.2.10-28.el7cp.x86_64.rpm
    python-rbd-10.2.10-28.el7cp.x86_64.rpm
    rbd-mirror-10.2.10-28.el7cp.x86_64.rpm
    
    These packages are GPG signed by Red Hat for security.  Our key and
    details on how to verify the signature are available from
    https://access.redhat.com/security/team/key/
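
    As a hedged example of that check: a downloaded package can be verified
    against the Red Hat release key before installation. The key path below
    is the one shipped on Red Hat Enterprise Linux 7, and the RPM filename is
    simply one package from the list above.

        # Import the Red Hat key (if not already present) and verify a package.
        rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
        rpm -K ceph-common-10.2.10-28.el7cp.x86_64.rpm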
    
    7. References:
    
    https://access.redhat.com/security/cve/CVE-2018-1128
    https://access.redhat.com/security/cve/CVE-2018-1129
    https://access.redhat.com/security/cve/CVE-2018-10861
    https://access.redhat.com/security/updates/classification/#moderate
    https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2.5/html/release_notes/bug_fixes
    
    8. Contact:
    
    The Red Hat security contact is <secalert@redhat.com>. More contact
    details at https://access.redhat.com/security/team/contact/
    
    Copyright 2018 Red Hat, Inc.
    -----BEGIN PGP SIGNATURE-----
    Version: GnuPG v1
    
    iQIVAwUBW1ob3tzjgjWX9erEAQjdExAAhfYqnLGq7AXZBXFNyWrvpng4Ez/JZrZ4
    cMMsLYQAJl3G+TM9mQsQ+SJDL/Bug0NZ3Zcs341q4tOeykHGnvZzn4Ka0jHcTX23
    mtr6V/tcrpscpMcRfPnnKQIOQ62E2KhxW4yxIw57S75amyM3k0oQ9LxYZSGGvRS7
    vpLu3JqocwRf1OCDjLjnv7i54fhIhWZEdQVqZHOCFGairRArQJ9q0x49/jmwt3LU
    qsuT9wu0Q98bNf1UOpc0R945tE18b/fpwUie4IXGqwpqVK7wKC0PfeKDxPqBhAlv
    HZDFrogjBhwTI07vLN5oel/uhTsYUaClJ/QlJ0S+2DmNOuwQRK6Q4tC/sninE1Ne
    byLKjSAW68GVFWt8BM7Eh5lhamkm7N1UK6IfjIrX4SMSpgUMaA8b2ec72QDP+j5G
    amPdmNGvKGZspdvkkTJcn2lWi4wkcZEsbRGuOV4cnQz0pzHd+/L8R7GAKcXwrZUQ
    bOKHhfaz64G22V5wPhZeJdUWXh+2hVh55s/3XadfBCD5Z3gxRcDc1HlenCG547Rk
    Y6d0zlr8bm9RcEkbdCMMsxLnWoyYKEuq3qVHohxoKMqkCeN2WrCg7CwMGvSssGg4
    PiDXUGHoxrbVxSqgCAoWy03r6nKMLzU3c4gN7j+aSAri2ZwLe/qQeRg6N6XRSd/D
    2TBKH8KDoBU=
    =DXPE
    -----END PGP SIGNATURE-----
    
    --
    RHSA-announce mailing list
    RHSA-announce@redhat.com
    https://www.redhat.com/mailman/listinfo/rhsa-announce
    