JaneTTR
2025-06-22

[O] Impala Version Adaptation (Part 1)

# Background and Goals

Building Impala from source in China-based big data integration and CI environments is routinely blocked by dependency download timeouts, poor script portability across platforms, and tangled environment variables. To tackle the pain points of slow, flaky, hard-to-maintain builds, this adaptation targets the following goals:

  • Fully open up the build pipeline on Rocky/AlmaLinux, aarch64, and other platforms common in domestic environments
  • Switch every dependency source to a highly available domestic mirror to speed up package pulls
  • Decouple script parameters from dependency types, easing future migration across scenarios
  • Improve overall script robustness, readability, and maintainability

Tip

Mirror acceleration and platform adaptation are now table stakes for domestic operations. Adapting early sharply reduces the manpower spent on cluster rollout and iteration.

# Environment Setup

| Dependency | Recommended version | Purpose                       | Install guide     |
| ---------- | ------------------- | ----------------------------- | ----------------- |
| JDK        | 1.8                 | Build and runtime environment | One-click install |
| Maven      | 3.8.x               | Java dependency management    | One-click install |
| Gradle     | 5.6.x               | RPM build tooling             | One-click install |

Note

Install all base dependencies with this site's one-click scripts, which also cover the environment variables and common pitfalls. If RPM/YUM has no network access, download the base packages offline ahead of time.
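Before committing to a multi-hour compile, it is worth failing fast on a bad toolchain. Below is a small sanity-check sketch, not part of the original workflow; the expected version strings are assumptions taken from the table above.

```python
import re
import subprocess

def tool_version(cmd):
    """Return combined stdout/stderr of a version command, or '' if absent."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout + result.stderr
    except FileNotFoundError:
        return ""

# Expected versions follow the table above (assumed; adjust to your build).
CHECKS = {
    "java":   (["java", "-version"], r'"1\.8'),
    "mvn":    (["mvn", "-v"],        r"Apache Maven 3\.8"),
    "gradle": (["gradle", "-v"],     r"Gradle 5\.6"),
}

for name, (cmd, pattern) in CHECKS.items():
    ok = re.search(pattern, tool_version(cmd)) is not None
    print("{0:<6} {1}".format(name, "OK" if ok else "missing or wrong version"))
```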

# Core Refactoring Ideas and diff Walkthrough

# 1. Multi-platform and Multi-architecture Support

  • Extended OS detection and toolchain mapping, so Rocky 8/9, AlmaLinux 8/9, aarch64, and other platforms switch over transparently
  • Platform type is auto-detected and toolchain dependencies are no longer hand-assembled, so the build drops straight into CI (a minimal sketch of the detection logic follows this list)
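The mechanics live in the get_platform_release_label() rewrite in the full diff below. As a minimal standalone sketch, with the mapping trimmed to just the Rocky/AlmaLinux entries:

```python
# Trimmed from the OS_MAPPING table in bin/bootstrap_toolchain.py.
OS_MAPPING = {
    "rocky8": "ec2-package-centos-8",
    "almalinux8": "ec2-package-centos-8",
    "rhel9": "ec2-package-rocky-9",
    "rocky9": "ec2-package-rocky-9",
    "almalinux9": "ec2-package-rocky-9",
}

def detect_release():
    """Concatenate ID and the major VERSION_ID from /etc/os-release, e.g. 'rocky9'."""
    os_id = None
    os_major = None
    with open("/etc/os-release") as f:
        for line in f:
            if line.startswith("ID="):
                os_id = line.split("=", 1)[1].strip().strip('"')
            elif line.startswith("VERSION_ID="):
                os_major = line.split("=", 1)[1].strip().strip('"').split(".")[0]
    if not os_id or not os_major:
        raise RuntimeError("could not parse /etc/os-release")
    return os_id + os_major

# Returns None for releases outside this trimmed map; the real script raises.
print(OS_MAPPING.get(detect_release()))
```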

# 2. Mirror and Dependency Source Acceleration

  • The default Python pip index is switched to Tsinghua TUNA, fixing slow and frequently timed-out package pulls
  • All wget downloads are replaced with curl, for compatibility with minimal systems and cloud host images
  • The Apache mirror is configurable through a variable; domestic nodes such as Huawei Cloud are recommended first

Any team with poor public-network connectivity or frequent CI runs benefits directly from the mirror acceleration.
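For illustration, a minimal sketch of the mirror-override pattern: the download URL is composed from an APACHE_MIRROR base that defaults to a domestic node but stays overridable per environment, following the hadoop/common layout used in impala-config.sh in the diff below. The version value is only an example.

```python
import os

# Default to the Huawei Cloud node; any environment can override APACHE_MIRROR.
APACHE_MIRROR = os.environ.get("APACHE_MIRROR",
                               "https://mirrors.huaweicloud.com/apache")

def hadoop_url(version):
    # Hadoop lives under hadoop/common/hadoop-<version>/ on Apache mirrors.
    return "{0}/hadoop/common/hadoop-{1}/hadoop-{1}.tar.gz".format(
        APACHE_MIRROR, version)

print(hadoop_url("3.3.4"))
# -> https://mirrors.huaweicloud.com/apache/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz
```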

# 3. Decoupling Dependency Types from Parameters

  • Variables such as USE_APACHE_HADOOP and USE_APACHE_HIVE switch the dependency type with a single flag
  • Companion scripts assemble the URL, unpack path, and dependency variables automatically, with no code changes
  • Swapping between CDP and Apache components becomes simple and reliable, cutting maintenance cost sharply (a minimal sketch of the toggle follows this list)
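A minimal, self-contained sketch of that toggle; the two stub classes here stand in for the real ApacheComponent and CdpComponent defined in bin/bootstrap_toolchain.py:

```python
import os

class ApacheHadoop:
    """Stand-in for ApacheComponent('hadoop', ...): Apache mirror URL scheme."""
    url_tmpl = "${apache_mirror}/hadoop/common/hadoop-${version}/hadoop-${version}.tar.gz"

class CdpHadoop:
    """Stand-in for CdpComponent('hadoop'): CDP build-number URL scheme."""
    url_tmpl = ("https://${toolchain_host}/build/cdp_components/"
                "${cdp_build_number}/tarballs/hadoop-${version}.tar.gz")

def resolve_hadoop():
    # One flag flips the dependency type; everything downstream consumes
    # the same object, so no caller needs to change.
    if os.environ.get("USE_APACHE_HADOOP", "false") == "true":
        return ApacheHadoop()
    return CdpHadoop()

print(type(resolve_hadoop()).__name__)  # CdpHadoop unless USE_APACHE_HADOOP=true
```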

Tip

This parameter-decoupling style is worth spreading across the whole big data stack; it makes both operations and custom development more efficient.

# 4. Script Robustness and Maintainability

  • rm/sed and similar calls gain extra flags to avoid accidental mis-deletes, and database directories are split between container and bare-metal runs
  • Any missing required environment variable now triggers an immediate error exit, eliminating silent failures (a fail-fast sketch follows this list)
  • PATH is exported up front, avoiding trivial "command not found" breakage
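A minimal fail-fast sketch in the spirit of the diff's create_directory_from_env_var(); the exact variable list here is illustrative:

```python
import os
import sys

REQUIRED_VARS = ["IMPALA_HOME", "IMPALA_TOOLCHAIN_PACKAGES_HOME"]

def require_env(var):
    """Abort immediately when a required variable is unset or empty."""
    value = os.environ.get(var)
    if not value:
        sys.stderr.write("Impala environment not set up correctly, make sure "
                         "{0} is set (source bin/impala-config.sh).\n".format(var))
        sys.exit(1)
    return value

for var in REQUIRED_VARS:
    require_env(var)
```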

# The full diff

Subject: [PATCH] optimized: use apache hive
---
Index: testdata/bin/patch_hive.sh
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/testdata/bin/patch_hive.sh b/testdata/bin/patch_hive.sh
--- a/testdata/bin/patch_hive.sh	(revision fd92dbc6bfd9be61af6d1e3f9d105ad50f158f27)
+++ b/testdata/bin/patch_hive.sh	(date 1740635402814)
@@ -61,7 +61,7 @@

 # 1. Fix HIVE-22915
 echo "Fix HIVE-22915"
-rm $HIVE_HOME/lib/guava-*jar
+rm -rf $HIVE_HOME/lib/guava-*jar
 cp $HADOOP_HOME/share/hadoop/hdfs/lib/guava-*.jar $HIVE_HOME/lib/

 # 2. Apply patches
Index: bin/bootstrap_toolchain.py
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/bin/bootstrap_toolchain.py b/bin/bootstrap_toolchain.py
--- a/bin/bootstrap_toolchain.py	(revision fd92dbc6bfd9be61af6d1e3f9d105ad50f158f27)
+++ b/bin/bootstrap_toolchain.py	(date 1740634267363)
@@ -55,6 +55,7 @@
 #     ./bootstrap_toolchain.py

 from __future__ import absolute_import, division, print_function
+
 import logging
 import multiprocessing.pool
 import os
@@ -65,7 +66,6 @@
 import sys
 import tempfile
 import time
-
 from collections import namedtuple
 from string import Template

@@ -74,301 +74,302 @@
 # /etc/os-release files.
 OsMapping = namedtuple('OsMapping', ['release', 'toolchain'])
 OS_MAPPING = [
-  OsMapping("rhel7", "ec2-package-centos-7"),
-  OsMapping("centos7", "ec2-package-centos-7"),
-  OsMapping("rhel8", "ec2-package-centos-8"),
-  OsMapping("centos8", "ec2-package-centos-8"),
-  OsMapping("rocky8", "ec2-package-centos-8"),
-  OsMapping("almalinux8", "ec2-package-centos-8"),
-  OsMapping("rhel9", "ec2-package-rocky-9"),
-  OsMapping("rocky9", "ec2-package-rocky-9"),
-  OsMapping("almalinux9", "ec2-package-rocky-9"),
-  OsMapping("sles12", "ec2-package-sles-12"),
-  OsMapping("sles15", "ec2-package-sles-15"),
-  OsMapping('ubuntu16', "ec2-package-ubuntu-16-04"),
-  OsMapping('ubuntu18', "ec2-package-ubuntu-18-04"),
-  OsMapping('ubuntu20', "ec2-package-ubuntu-20-04"),
-  OsMapping('ubuntu22', "ec2-package-ubuntu-22-04")
+    OsMapping("rhel7", "ec2-package-centos-7"),
+    OsMapping("centos7", "ec2-package-centos-7"),
+    OsMapping("rhel8", "ec2-package-centos-8"),
+    OsMapping("centos8", "ec2-package-centos-8"),
+    OsMapping("rocky8", "ec2-package-centos-8"),
+    OsMapping("almalinux8", "ec2-package-centos-8"),
+    OsMapping("rhel9", "ec2-package-rocky-9"),
+    OsMapping("rocky9", "ec2-package-rocky-9"),
+    OsMapping("almalinux9", "ec2-package-rocky-9"),
+    OsMapping("sles12", "ec2-package-sles-12"),
+    OsMapping("sles15", "ec2-package-sles-15"),
+    OsMapping('ubuntu16', "ec2-package-ubuntu-16-04"),
+    OsMapping('ubuntu18', "ec2-package-ubuntu-18-04"),
+    OsMapping('ubuntu20', "ec2-package-ubuntu-20-04"),
+    OsMapping('ubuntu22', "ec2-package-ubuntu-22-04")
 ]


 def get_toolchain_compiler():
-  """Return the <name>-<version> string for the compiler package to use for the
-  toolchain."""
-  # Currently we always use GCC.
-  return "gcc-{0}".format(os.environ["IMPALA_GCC_VERSION"])
+    """Return the <name>-<version> string for the compiler package to use for the
+    toolchain."""
+    # Currently we always use GCC.
+    return "gcc-{0}".format(os.environ["IMPALA_GCC_VERSION"])


 def wget_and_unpack_package(download_path, file_name, destination, wget_no_clobber):
-  if not download_path.endswith("/" + file_name):
-    raise Exception("URL {0} does not match with expected file_name {1}"
-        .format(download_path, file_name))
-  if "closer.cgi" in download_path:
-    download_path += "?action=download"
-  NUM_ATTEMPTS = 3
-  for attempt in range(1, NUM_ATTEMPTS + 1):
-    logging.info("Downloading {0} to {1}/{2} (attempt {3})".format(
-      download_path, destination, file_name, attempt))
-    # --no-clobber avoids downloading the file if a file with the name already exists
-    try:
-      cmd = ["wget", "-q", download_path,
-             "--output-document={0}/{1}".format(destination, file_name)]
-      if wget_no_clobber:
-        cmd.append("--no-clobber")
-      subprocess.check_call(cmd)
-      break
-    except subprocess.CalledProcessError as e:
-      if attempt == NUM_ATTEMPTS:
-        raise
-      logging.error("Download failed; retrying after sleep: " + str(e))
-      time.sleep(10 + random.random() * 5)  # Sleep between 10 and 15 seconds.
-  logging.info("Extracting {0}".format(file_name))
-  subprocess.check_call(["tar", "xzf", os.path.join(destination, file_name),
-                         "--directory={0}".format(destination)])
-  os.unlink(os.path.join(destination, file_name))
+    if not download_path.endswith("/" + file_name):
+        raise Exception("URL {0} does not match with expected file_name {1}"
+                        .format(download_path, file_name))
+    if "closer.cgi" in download_path:
+        download_path += "?action=download"
+    NUM_ATTEMPTS = 3
+    for attempt in range(1, NUM_ATTEMPTS + 1):
+        logging.info("Downloading {0} to {1}/{2} (attempt {3})".format(
+            download_path, destination, file_name, attempt))
+        # --no-clobber avoids downloading the file if a file with the name already exists
+        try:
+            cmd = ["wget", "-q", download_path,
+                   "--output-document={0}/{1}".format(destination, file_name)]
+            if wget_no_clobber:
+                cmd.append("--no-clobber")
+            subprocess.check_call(cmd)
+            break
+        except subprocess.CalledProcessError as e:
+            if attempt == NUM_ATTEMPTS:
+                raise
+            logging.error("Download failed; retrying after sleep: " + str(e))
+            time.sleep(10 + random.random() * 5)  # Sleep between 10 and 15 seconds.
+    logging.info("Extracting {0}".format(file_name))
+    subprocess.check_call(["tar", "xzf", os.path.join(destination, file_name),
+                           "--directory={0}".format(destination)])
+    os.unlink(os.path.join(destination, file_name))


 class DownloadUnpackTarball(object):
-  """
-  The basic unit of work for bootstrapping the toolchain is:
-   - check if a package is already present (via the needs_download() method)
-   - if it is not, download a tarball and unpack it into the appropriate directory
-     (via the download() method)
-  In this base case, everything is known: the url to download from, the archive to
-  unpack, and the destination directory.
-  """
-  def __init__(self, url, archive_name, destination_basedir, directory_name, makedir):
-    self.url = url
-    self.archive_name = archive_name
-    assert self.archive_name.endswith(".tar.gz")
-    self.archive_basename = self.archive_name.replace(".tar.gz", "")
-    self.destination_basedir = destination_basedir
-    # destination base directory must exist
-    assert os.path.isdir(self.destination_basedir)
-    self.directory_name = directory_name
-    self.makedir = makedir
+    """
+    The basic unit of work for bootstrapping the toolchain is:
+     - check if a package is already present (via the needs_download() method)
+     - if it is not, download a tarball and unpack it into the appropriate directory
+       (via the download() method)
+    In this base case, everything is known: the url to download from, the archive to
+    unpack, and the destination directory.
+    """
+
+    def __init__(self, url, archive_name, destination_basedir, directory_name, makedir):
+        self.url = url
+        self.archive_name = archive_name
+        assert self.archive_name.endswith(".tar.gz")
+        self.archive_basename = self.archive_name.replace(".tar.gz", "")
+        self.destination_basedir = destination_basedir
+        # destination base directory must exist
+        assert os.path.isdir(self.destination_basedir)
+        self.directory_name = directory_name
+        self.makedir = makedir

-  def pkg_directory(self):
-    return os.path.join(self.destination_basedir, self.directory_name)
+    def pkg_directory(self):
+        return os.path.join(self.destination_basedir, self.directory_name)

-  def needs_download(self):
-    if os.path.isdir(self.pkg_directory()): return False
-    return True
+    def needs_download(self):
+        if os.path.isdir(self.pkg_directory()): return False
+        return True

-  def download(self):
-    unpack_dir = self.pkg_directory()
-    if self.makedir:
-      # Download and unpack in a temp directory, which we'll later move into place
-      download_dir = tempfile.mkdtemp(dir=self.destination_basedir)
-    else:
-      download_dir = self.destination_basedir
-    try:
-      wget_and_unpack_package(self.url, self.archive_name, download_dir, False)
-    except:  # noqa
-      # Clean up any partially-unpacked result.
-      if os.path.isdir(unpack_dir):
-        shutil.rmtree(unpack_dir)
-      # Only delete the download directory if it is a temporary directory
-      if download_dir != self.destination_basedir and os.path.isdir(download_dir):
-        shutil.rmtree(download_dir)
-      raise
-    if self.makedir:
-      os.rename(download_dir, unpack_dir)
+    def download(self):
+        unpack_dir = self.pkg_directory()
+        if self.makedir:
+            # Download and unpack in a temp directory, which we'll later move into place
+            download_dir = tempfile.mkdtemp(dir=self.destination_basedir)
+        else:
+            download_dir = self.destination_basedir
+        try:
+            wget_and_unpack_package(self.url, self.archive_name, download_dir, False)
+        except:  # noqa
+            # Clean up any partially-unpacked result.
+            if os.path.isdir(unpack_dir):
+                shutil.rmtree(unpack_dir)
+            # Only delete the download directory if it is a temporary directory
+            if download_dir != self.destination_basedir and os.path.isdir(download_dir):
+                shutil.rmtree(download_dir)
+            raise
+        if self.makedir:
+            os.rename(download_dir, unpack_dir)


 class TemplatedDownloadUnpackTarball(DownloadUnpackTarball):
-  def __init__(self, url_tmpl, archive_name_tmpl, destination_basedir_tmpl,
-               directory_name_tmpl, makedir, template_subs):
-    url = self.__do_substitution(url_tmpl, template_subs)
-    archive_name = self.__do_substitution(archive_name_tmpl, template_subs)
-    destination_basedir = self.__do_substitution(destination_basedir_tmpl, template_subs)
-    directory_name = self.__do_substitution(directory_name_tmpl, template_subs)
-    super(TemplatedDownloadUnpackTarball, self).__init__(url, archive_name,
-        destination_basedir, directory_name, makedir)
+    def __init__(self, url_tmpl, archive_name_tmpl, destination_basedir_tmpl,
+                 directory_name_tmpl, makedir, template_subs):
+        url = self.__do_substitution(url_tmpl, template_subs)
+        archive_name = self.__do_substitution(archive_name_tmpl, template_subs)
+        destination_basedir = self.__do_substitution(destination_basedir_tmpl, template_subs)
+        directory_name = self.__do_substitution(directory_name_tmpl, template_subs)
+        super(TemplatedDownloadUnpackTarball, self).__init__(url, archive_name,
+                                                             destination_basedir, directory_name, makedir)

-  def __do_substitution(self, template, template_subs):
-    return Template(template).substitute(**template_subs)
+    def __do_substitution(self, template, template_subs):
+        return Template(template).substitute(**template_subs)


 class EnvVersionedPackage(TemplatedDownloadUnpackTarball):
-  def __init__(self, name, url_prefix_tmpl, destination_basedir, explicit_version=None,
-               archive_basename_tmpl=None, unpack_directory_tmpl=None, makedir=False,
-               template_subs_in={}, target_comp=None):
-    template_subs = template_subs_in
-    template_subs["name"] = name
-    template_subs["version"] = self.__compute_version(name, explicit_version,
-        target_comp)
-    # The common case is that X.tar.gz unpacks to X directory. archive_basename_tmpl
-    # allows overriding the value of X (which defaults to ${name}-${version}).
-    # If X.tar.gz unpacks to Y directory, then unpack_directory_tmpl allows overriding Y.
-    if archive_basename_tmpl is None:
-      archive_basename_tmpl = "${name}-${version}"
-    archive_name_tmpl = archive_basename_tmpl + ".tar.gz"
-    if unpack_directory_tmpl is None:
-      unpack_directory_tmpl = archive_basename_tmpl
-    url_tmpl = self.__compute_url(name, archive_name_tmpl, url_prefix_tmpl, target_comp)
-    super(EnvVersionedPackage, self).__init__(url_tmpl, archive_name_tmpl,
-        destination_basedir, unpack_directory_tmpl, makedir, template_subs)
+    def __init__(self, name, url_prefix_tmpl, destination_basedir, explicit_version=None,
+                 archive_basename_tmpl=None, unpack_directory_tmpl=None, makedir=False,
+                 template_subs_in={}, target_comp=None):
+        template_subs = template_subs_in
+        template_subs["name"] = name
+        template_subs["version"] = self.__compute_version(name, explicit_version,
+                                                          target_comp)
+        # The common case is that X.tar.gz unpacks to X directory. archive_basename_tmpl
+        # allows overriding the value of X (which defaults to ${name}-${version}).
+        # If X.tar.gz unpacks to Y directory, then unpack_directory_tmpl allows overriding Y.
+        if archive_basename_tmpl is None:
+            archive_basename_tmpl = "${name}-${version}"
+        archive_name_tmpl = archive_basename_tmpl + ".tar.gz"
+        if unpack_directory_tmpl is None:
+            unpack_directory_tmpl = archive_basename_tmpl
+        url_tmpl = self.__compute_url(name, archive_name_tmpl, url_prefix_tmpl, target_comp)
+        super(EnvVersionedPackage, self).__init__(url_tmpl, archive_name_tmpl,
+                                                  destination_basedir, unpack_directory_tmpl, makedir, template_subs)

-  def __compute_version(self, name, explicit_version, target_comp=None):
-    if explicit_version is not None:
-      return explicit_version
-    else:
-      # When getting the version from the environment, we need to standardize the name
-      # to match expected environment variables.
-      std_env_name = name.replace("-", "_").upper()
-      if target_comp:
-        std_env_name += '_' + target_comp.upper()
-      version_env_var = "IMPALA_{0}_VERSION".format(std_env_name)
-      env_version = os.environ.get(version_env_var)
-      if not env_version:
-        raise Exception("Could not find version for {0} in environment var {1}".format(
-          name, version_env_var))
-      return env_version
+    def __compute_version(self, name, explicit_version, target_comp=None):
+        if explicit_version is not None:
+            return explicit_version
+        else:
+            # When getting the version from the environment, we need to standardize the name
+            # to match expected environment variables.
+            std_env_name = name.replace("-", "_").upper()
+            if target_comp:
+                std_env_name += '_' + target_comp.upper()
+            version_env_var = "IMPALA_{0}_VERSION".format(std_env_name)
+            env_version = os.environ.get(version_env_var)
+            if not env_version:
+                raise Exception("Could not find version for {0} in environment var {1}".format(
+                    name, version_env_var))
+            return env_version

-  def __compute_url(self, name, archive_name_tmpl, url_prefix_tmpl, target_comp=None):
-    # The URL defined in the environment (IMPALA_*_URL) takes precedence. If that is
-    # not defined, use the standard URL (url_prefix + archive_name)
-    std_env_name = name.replace("-", "_").upper()
-    if target_comp:
-      std_env_name += '_' + target_comp.upper()
-    url_env_var = "IMPALA_{0}_URL".format(std_env_name)
-    url_tmpl = os.environ.get(url_env_var)
-    if not url_tmpl:
-      url_tmpl = os.path.join(url_prefix_tmpl, archive_name_tmpl)
-    return url_tmpl
+    def __compute_url(self, name, archive_name_tmpl, url_prefix_tmpl, target_comp=None):
+        # The URL defined in the environment (IMPALA_*_URL) takes precedence. If that is
+        # not defined, use the standard URL (url_prefix + archive_name)
+        std_env_name = name.replace("-", "_").upper()
+        if target_comp:
+            std_env_name += '_' + target_comp.upper()
+        url_env_var = "IMPALA_{0}_URL".format(std_env_name)
+        url_tmpl = os.environ.get(url_env_var)
+        if not url_tmpl:
+            url_tmpl = os.path.join(url_prefix_tmpl, archive_name_tmpl)
+        return url_tmpl


 class ToolchainPackage(EnvVersionedPackage):
-  def __init__(self, name, explicit_version=None, platform_release=None):
-    toolchain_packages_home = os.environ.get("IMPALA_TOOLCHAIN_PACKAGES_HOME")
-    if not toolchain_packages_home:
-      logging.error("Impala environment not set up correctly, make sure "
-          "$IMPALA_TOOLCHAIN_PACKAGES_HOME is set.")
-      sys.exit(1)
-    target_comp = None
-    if ":" in name:
-      parts = name.split(':')
-      name = parts[0]
-      target_comp = parts[1]
-    compiler = get_toolchain_compiler()
-    label = get_platform_release_label(release=platform_release).toolchain
-    # Most common return values for machine are x86_64 or aarch64
-    arch = platform.machine()
-    if arch not in ['aarch64', 'x86_64']:
-      raise Exception("Unsupported architecture '{}' for pre-built native-toolchain. "
-          "Fetch and build it locally by setting NATIVE_TOOLCHAIN_HOME".format(arch))
-    toolchain_build_id = os.environ["IMPALA_TOOLCHAIN_BUILD_ID_{}".format(arch.upper())]
-    toolchain_host = os.environ["IMPALA_TOOLCHAIN_HOST"]
-    template_subs = {'compiler': compiler, 'label': label, 'arch': arch,
-                     'toolchain_build_id': toolchain_build_id,
-                     'toolchain_host': toolchain_host}
-    archive_basename_tmpl = "${name}-${version}-${compiler}-${label}-${arch}"
-    url_prefix_tmpl = "https://${toolchain_host}/build/${toolchain_build_id}/" + \
-        "${name}/${version}-${compiler}/"
-    unpack_directory_tmpl = "${name}-${version}"
-    super(ToolchainPackage, self).__init__(name, url_prefix_tmpl,
-                                           toolchain_packages_home,
-                                           explicit_version=explicit_version,
-                                           archive_basename_tmpl=archive_basename_tmpl,
-                                           unpack_directory_tmpl=unpack_directory_tmpl,
-                                           template_subs_in=template_subs,
-                                           target_comp=target_comp)
+    def __init__(self, name, explicit_version=None, platform_release=None):
+        toolchain_packages_home = os.environ.get("IMPALA_TOOLCHAIN_PACKAGES_HOME")
+        if not toolchain_packages_home:
+            logging.error("Impala environment not set up correctly, make sure "
+                          "$IMPALA_TOOLCHAIN_PACKAGES_HOME is set.")
+            sys.exit(1)
+        target_comp = None
+        if ":" in name:
+            parts = name.split(':')
+            name = parts[0]
+            target_comp = parts[1]
+        compiler = get_toolchain_compiler()
+        label = get_platform_release_label(release=platform_release).toolchain
+        # Most common return values for machine are x86_64 or aarch64
+        arch = platform.machine()
+        if arch not in ['aarch64', 'x86_64']:
+            raise Exception("Unsupported architecture '{}' for pre-built native-toolchain. "
+                            "Fetch and build it locally by setting NATIVE_TOOLCHAIN_HOME".format(arch))
+        toolchain_build_id = os.environ["IMPALA_TOOLCHAIN_BUILD_ID_{}".format(arch.upper())]
+        toolchain_host = os.environ["IMPALA_TOOLCHAIN_HOST"]
+        template_subs = {'compiler': compiler, 'label': label, 'arch': arch,
+                         'toolchain_build_id': toolchain_build_id,
+                         'toolchain_host': toolchain_host}
+        archive_basename_tmpl = "${name}-${version}-${compiler}-${label}-${arch}"
+        url_prefix_tmpl = "https://${toolchain_host}/build/${toolchain_build_id}/" + \
+                          "${name}/${version}-${compiler}/"
+        unpack_directory_tmpl = "${name}-${version}"
+        super(ToolchainPackage, self).__init__(name, url_prefix_tmpl,
+                                               toolchain_packages_home,
+                                               explicit_version=explicit_version,
+                                               archive_basename_tmpl=archive_basename_tmpl,
+                                               unpack_directory_tmpl=unpack_directory_tmpl,
+                                               template_subs_in=template_subs,
+                                               target_comp=target_comp)

-  def needs_download(self):
-    # If the directory doesn't exist, we need the download
-    unpack_dir = self.pkg_directory()
-    if not os.path.isdir(unpack_dir): return True
-    version_file = os.path.join(unpack_dir, "toolchain_package_version.txt")
-    if not os.path.exists(version_file): return True
-    with open(version_file, "r") as f:
-      return f.read().strip() != self.archive_basename
+    def needs_download(self):
+        # If the directory doesn't exist, we need the download
+        unpack_dir = self.pkg_directory()
+        if not os.path.isdir(unpack_dir): return True
+        version_file = os.path.join(unpack_dir, "toolchain_package_version.txt")
+        if not os.path.exists(version_file): return True
+        with open(version_file, "r") as f:
+            return f.read().strip() != self.archive_basename

-  def download(self):
-    # Remove the existing package directory if it exists (since this has additional
-    # conditions as part of needs_download())
-    unpack_dir = self.pkg_directory()
-    if os.path.exists(unpack_dir):
-      logging.info("Removing existing package directory {0}".format(unpack_dir))
-      shutil.rmtree(unpack_dir)
-    super(ToolchainPackage, self).download()
-    # Write the toolchain_package_version.txt file
-    version_file = os.path.join(unpack_dir, "toolchain_package_version.txt")
-    with open(version_file, "w") as f:
-      f.write(self.archive_basename)
+    def download(self):
+        # Remove the existing package directory if it exists (since this has additional
+        # conditions as part of needs_download())
+        unpack_dir = self.pkg_directory()
+        if os.path.exists(unpack_dir):
+            logging.info("Removing existing package directory {0}".format(unpack_dir))
+            shutil.rmtree(unpack_dir)
+        super(ToolchainPackage, self).download()
+        # Write the toolchain_package_version.txt file
+        version_file = os.path.join(unpack_dir, "toolchain_package_version.txt")
+        with open(version_file, "w") as f:
+            f.write(self.archive_basename)


 class CdpComponent(EnvVersionedPackage):
-  def __init__(self, name, explicit_version=None, archive_basename_tmpl=None,
-               unpack_directory_tmpl=None, makedir=False):
-    # Compute the CDP base URL (based on the IMPALA_TOOLCHAIN_HOST and CDP_BUILD_NUMBER)
-    if "IMPALA_TOOLCHAIN_HOST" not in os.environ or "CDP_BUILD_NUMBER" not in os.environ:
-      logging.error("Impala environment not set up correctly, make sure "
-                    "impala-config.sh is sourced.")
-      sys.exit(1)
-    template_subs = {"toolchain_host": os.environ["IMPALA_TOOLCHAIN_HOST"],
-                     "cdp_build_number": os.environ["CDP_BUILD_NUMBER"]}
-    url_prefix_tmpl = "https://${toolchain_host}/build/cdp_components/" + \
-        "${cdp_build_number}/tarballs/"
+    def __init__(self, name, explicit_version=None, archive_basename_tmpl=None,
+                 unpack_directory_tmpl=None, makedir=False):
+        # Compute the CDP base URL (based on the IMPALA_TOOLCHAIN_HOST and CDP_BUILD_NUMBER)
+        if "IMPALA_TOOLCHAIN_HOST" not in os.environ or "CDP_BUILD_NUMBER" not in os.environ:
+            logging.error("Impala environment not set up correctly, make sure "
+                          "impala-config.sh is sourced.")
+            sys.exit(1)
+        template_subs = {"toolchain_host": os.environ["IMPALA_TOOLCHAIN_HOST"],
+                         "cdp_build_number": os.environ["CDP_BUILD_NUMBER"]}
+        url_prefix_tmpl = "https://${toolchain_host}/build/cdp_components/" + \
+                          "${cdp_build_number}/tarballs/"

-    # Get the output base directory from CDP_COMPONENTS_HOME
-    destination_basedir = os.environ["CDP_COMPONENTS_HOME"]
-    super(CdpComponent, self).__init__(name, url_prefix_tmpl, destination_basedir,
-                                       explicit_version=explicit_version,
-                                       archive_basename_tmpl=archive_basename_tmpl,
-                                       unpack_directory_tmpl=unpack_directory_tmpl,
-                                       makedir=makedir, template_subs_in=template_subs)
+        # Get the output base directory from CDP_COMPONENTS_HOME
+        destination_basedir = os.environ["CDP_COMPONENTS_HOME"]
+        super(CdpComponent, self).__init__(name, url_prefix_tmpl, destination_basedir,
+                                           explicit_version=explicit_version,
+                                           archive_basename_tmpl=archive_basename_tmpl,
+                                           unpack_directory_tmpl=unpack_directory_tmpl,
+                                           makedir=makedir, template_subs_in=template_subs)


 class ApacheComponent(EnvVersionedPackage):
-  def __init__(self, name, explicit_version=None, archive_basename_tmpl=None,
-               unpack_directory_tmpl=None, makedir=False, component_path_tmpl=None):
-    # Compute the apache base URL (based on the APACHE_MIRROR)
-    if "APACHE_COMPONENTS_HOME" not in os.environ:
-      logging.error("Impala environment not set up correctly, make sure "
-                    "impala-config.sh is sourced.")
-      sys.exit(1)
-    template_subs = {"apache_mirror": os.environ["APACHE_MIRROR"]}
-    # Different components have different sub-paths. For example, hive is hive/hive-xxx,
-    # hadoop is hadoop/common/hadoop-xxx. The default is hive format.
-    if component_path_tmpl is None:
-      component_path_tmpl = "${name}/${name}-${version}/"
-    url_prefix_tmpl = "${apache_mirror}/" + component_path_tmpl
+    def __init__(self, name, explicit_version=None, archive_basename_tmpl=None,
+                 unpack_directory_tmpl=None, makedir=False, component_path_tmpl=None):
+        # Compute the apache base URL (based on the APACHE_MIRROR)
+        if "APACHE_COMPONENTS_HOME" not in os.environ:
+            logging.error("Impala environment not set up correctly, make sure "
+                          "impala-config.sh is sourced.")
+            sys.exit(1)
+        template_subs = {"apache_mirror": os.environ["APACHE_MIRROR"]}
+        # Different components have different sub-paths. For example, hive is hive/hive-xxx,
+        # hadoop is hadoop/common/hadoop-xxx. The default is hive format.
+        if component_path_tmpl is None:
+            component_path_tmpl = "${name}/${name}-${version}/"
+        url_prefix_tmpl = "${apache_mirror}/" + component_path_tmpl

-    # Get the output base directory from APACHE_COMPONENTS_HOME
-    destination_basedir = os.environ["APACHE_COMPONENTS_HOME"]
-    super(ApacheComponent, self).__init__(name, url_prefix_tmpl, destination_basedir,
-                                       explicit_version=explicit_version,
-                                       archive_basename_tmpl=archive_basename_tmpl,
-                                       unpack_directory_tmpl=unpack_directory_tmpl,
-                                       makedir=makedir, template_subs_in=template_subs)
+        # Get the output base directory from APACHE_COMPONENTS_HOME
+        destination_basedir = os.environ["APACHE_COMPONENTS_HOME"]
+        super(ApacheComponent, self).__init__(name, url_prefix_tmpl, destination_basedir,
+                                              explicit_version=explicit_version,
+                                              archive_basename_tmpl=archive_basename_tmpl,
+                                              unpack_directory_tmpl=unpack_directory_tmpl,
+                                              makedir=makedir, template_subs_in=template_subs)


 class ToolchainKudu(ToolchainPackage):
-  def __init__(self, platform_label=None):
-    super(ToolchainKudu, self).__init__('kudu', platform_release=platform_label)
+    def __init__(self, platform_label=None):
+        super(ToolchainKudu, self).__init__('kudu', platform_release=platform_label)

-  def needs_download(self):
-    # This verifies that the unpack directory exists
-    if super(ToolchainKudu, self).needs_download():
-      return True
-    # Additional check to distinguish this from the Kudu Java package
-    # Regardless of the actual build type, the 'kudu' tarball will always contain a
-    # 'debug' and a 'release' directory.
-    if not os.path.exists(os.path.join(self.pkg_directory(), "debug")):
-      return True
-    # Both the pkg_directory and the debug directory exist
-    return False
+    def needs_download(self):
+        # This verifies that the unpack directory exists
+        if super(ToolchainKudu, self).needs_download():
+            return True
+        # Additional check to distinguish this from the Kudu Java package
+        # Regardless of the actual build type, the 'kudu' tarball will always contain a
+        # 'debug' and a 'release' directory.
+        if not os.path.exists(os.path.join(self.pkg_directory(), "debug")):
+            return True
+        # Both the pkg_directory and the debug directory exist
+        return False


 def try_get_platform_release_label():
-  """Gets the right package label from the OS version. Returns an OsMapping with both
-     'toolchain' and 'cdh' labels. Return None if not found.
-  """
-  try:
-    return get_platform_release_label()
-  except Exception:
-    return None
+    """Gets the right package label from the OS version. Returns an OsMapping with both
+       'toolchain' and 'cdh' labels. Return None if not found.
+    """
+    try:
+        return get_platform_release_label()
+    except Exception:
+        return None


 # Cache the /etc/os-release calculation to shave a little bit of time.
@@ -376,216 +377,222 @@


 def get_platform_release_label(release=None):
-  """Gets the right package label from the OS version. Raise exception if not found.
-     'release' can be provided to override the underlying OS version. This uses
-     ID and VERSION_ID from /etc/os-release to identify a distribution. Specifically,
-     this returns the concatenation of the ID and the major version component
-     of VERSION_ID. i.e. ID=ubuntu VERSION_ID=16.04 => ubuntu16
-  """
-  global os_release_cache
-  if not release:
-    if os_release_cache:
-      release = os_release_cache
-    else:
-      os_id = None
-      os_major_version = None
-      with open("/etc/os-release") as f:
-        for line in f:
-          # We assume that ID and VERSION_ID are present and don't contain '=' inside
-          # the actual value. This is true for all distributions we currently support.
-          if line.startswith("ID="):
-            os_id = line.split("=")[1].strip().strip('"')
-          elif line.startswith("VERSION_ID="):
-            os_version_id = line.split("=")[1].strip().strip('"')
-            # Some distributions have a major version that doesn't change (e.g. 3.12.0
-            # and 3.12.0). The distributions that we support don't do this. This
-            # calculation would need to change for that circumstance.
-            os_major_version = os_version_id.split(".")[0]
+    """Gets the right package label from the OS version. Raise exception if not found.
+       'release' can be provided to override the underlying OS version. This uses
+       ID and VERSION_ID from /etc/os-release to identify a distribution. Specifically,
+       this returns the concatenation of the ID and the major version component
+       of VERSION_ID. i.e. ID=ubuntu VERSION_ID=16.04 => ubuntu16
+    """
+    global os_release_cache
+    if not release:
+        if os_release_cache:
+            release = os_release_cache
+        else:
+            os_id = None
+            os_major_version = None
+            with open("/etc/os-release") as f:
+                for line in f:
+                    # We assume that ID and VERSION_ID are present and don't contain '=' inside
+                    # the actual value. This is true for all distributions we currently support.
+                    if line.startswith("ID="):
+                        os_id = line.split("=")[1].strip().strip('"')
+                    elif line.startswith("VERSION_ID="):
+                        os_version_id = line.split("=")[1].strip().strip('"')
+                        # Some distributions have a major version that doesn't change (e.g. 3.12.0
+                        # and 3.12.0). The distributions that we support don't do this. This
+                        # calculation would need to change for that circumstance.
+                        os_major_version = os_version_id.split(".")[0]

-      if os_id is None or os_major_version is None:
-        raise Exception("Error parsing /etc/os-release: "
-            "os_id={0} os_major_version={1}".format(os_id, os_major_version))
+            if os_id is None or os_major_version is None:
+                raise Exception("Error parsing /etc/os-release: "
+                                "os_id={0} os_major_version={1}".format(os_id, os_major_version))

-      release = "{0}{1}".format(os_id, os_major_version)
-      os_release_cache = release
-  for mapping in OS_MAPPING:
-    if mapping.release == release:
-      return mapping
-  raise Exception("Could not find package label for OS version: {0}.".format(release))
+            release = "{0}{1}".format(os_id, os_major_version)
+            os_release_cache = release
+    for mapping in OS_MAPPING:
+        if mapping.release == release:
+            return mapping
+    raise Exception("Could not find package label for OS version: {0}.".format(release))


 def check_custom_toolchain(toolchain_packages_home, packages):
-  missing = []
-  for p in packages:
-    if not os.path.isdir(p.pkg_directory()):
-      missing.append((p, p.pkg_directory()))
+    missing = []
+    for p in packages:
+        if not os.path.isdir(p.pkg_directory()):
+            missing.append((p, p.pkg_directory()))

-  if missing:
-    msg = "The following packages are not in their expected locations.\n"
-    for p, pkg_dir in missing:
-      msg += "  %s (expected directory %s to exist)\n" % (p, pkg_dir)
-    msg += "Pre-built toolchain archives not available for your platform.\n"
-    msg += "Clone and build native toolchain from source using this repository:\n"
-    msg += "    https://github.com/cloudera/native-toolchain\n"
-    logging.error(msg)
-    raise Exception("Toolchain bootstrap failed: required packages were missing")
+    if missing:
+        msg = "The following packages are not in their expected locations.\n"
+        for p, pkg_dir in missing:
+            msg += "  %s (expected directory %s to exist)\n" % (p, pkg_dir)
+        msg += "Pre-built toolchain archives not available for your platform.\n"
+        msg += "Clone and build native toolchain from source using this repository:\n"
+        msg += "    https://github.com/cloudera/native-toolchain\n"
+        logging.error(msg)
+        raise Exception("Toolchain bootstrap failed: required packages were missing")


 def execute_many(f, args):
-  """
-  Executes f(a) for a in args using a threadpool to execute in parallel.
-  The pool uses the smaller of 4 and the number of CPUs in the system
-  as the pool size.
-  """
-  pool = multiprocessing.pool.ThreadPool(processes=min(multiprocessing.cpu_count(), 4))
-  return pool.map(f, args, 1)
+    """
+    Executes f(a) for a in args using a threadpool to execute in parallel.
+    The pool uses the smaller of 4 and the number of CPUs in the system
+    as the pool size.
+    """
+    pool = multiprocessing.pool.ThreadPool(processes=min(multiprocessing.cpu_count(), 4))
+    return pool.map(f, args, 1)


 def create_directory_from_env_var(env_var):
-  dir_name = os.environ.get(env_var)
-  if not dir_name:
-    logging.error("Impala environment not set up correctly, make sure "
-        "{0} is set.".format(env_var))
-    sys.exit(1)
-  if not os.path.exists(dir_name):
-    os.makedirs(dir_name)
+    dir_name = os.environ.get(env_var)
+    if not dir_name:
+        logging.error("Impala environment not set up correctly, make sure "
+                      "{0} is set.".format(env_var))
+        sys.exit(1)
+    if not os.path.exists(dir_name):
+        os.makedirs(dir_name)


 def get_unique_toolchain_downloads(packages):
-  toolchain_packages = [ToolchainPackage(p) for p in packages]
-  unique_pkg_directories = set()
-  unique_packages = []
-  for p in toolchain_packages:
-    if p.pkg_directory() not in unique_pkg_directories:
-      unique_packages.append(p)
-      unique_pkg_directories.add(p.pkg_directory())
-  return unique_packages
+    toolchain_packages = [ToolchainPackage(p) for p in packages]
+    unique_pkg_directories = set()
+    unique_packages = []
+    for p in toolchain_packages:
+        if p.pkg_directory() not in unique_pkg_directories:
+            unique_packages.append(p)
+            unique_pkg_directories.add(p.pkg_directory())
+    return unique_packages


 def get_toolchain_downloads():
-  toolchain_packages = []
-  # The LLVM and GCC packages are the largest packages in the toolchain (Kudu is handled
-  # separately). Sort them first so their downloads start as soon as possible.
-  llvm_package = ToolchainPackage("llvm")
-  llvm_package_asserts = ToolchainPackage(
-      "llvm", explicit_version=os.environ.get("IMPALA_LLVM_DEBUG_VERSION"))
-  gcc_package = ToolchainPackage("gcc")
-  toolchain_packages += [llvm_package, llvm_package_asserts, gcc_package]
-  toolchain_packages += [ToolchainPackage(p) for p in
-      ["avro", "binutils", "boost", "breakpad", "bzip2", "calloncehack", "cctz",
-       "cloudflarezlib", "cmake", "crcutil", "curl", "flatbuffers", "gdb", "gflags",
-       "glog", "gperftools", "gtest", "jwt-cpp", "libev", "libunwind", "lz4", "mold",
-       "openldap", "orc", "protobuf", "python", "rapidjson", "re2", "snappy", "tpc-h",
-       "tpc-ds", "zlib", "zstd"]]
-  python3_package = ToolchainPackage(
-      "python", explicit_version=os.environ.get("IMPALA_PYTHON3_VERSION"))
-  toolchain_packages += [python3_package]
-  toolchain_packages += get_unique_toolchain_downloads(
-      ["thrift:cpp", "thrift:java", "thrift:py"])
-  protobuf_package_clang = ToolchainPackage(
-      "protobuf", explicit_version=os.environ.get("IMPALA_PROTOBUF_CLANG_VERSION"))
-  toolchain_packages += [protobuf_package_clang]
-  if platform.machine() == 'aarch64':
-    toolchain_packages.append(ToolchainPackage("hadoop-client"))
-  # Check whether this platform is supported (or whether a valid custom toolchain
-  # has been provided).
-  if not try_get_platform_release_label() \
-     or not try_get_platform_release_label().toolchain:
-    toolchain_packages_home = os.environ.get("IMPALA_TOOLCHAIN_PACKAGES_HOME")
-    # This would throw an exception if the custom toolchain were not valid
-    check_custom_toolchain(toolchain_packages_home, toolchain_packages)
-    # Nothing to download
-    return []
-  return toolchain_packages
+    toolchain_packages = []
+    # The LLVM and GCC packages are the largest packages in the toolchain (Kudu is handled
+    # separately). Sort them first so their downloads start as soon as possible.
+    llvm_package = ToolchainPackage("llvm")
+    llvm_package_asserts = ToolchainPackage(
+        "llvm", explicit_version=os.environ.get("IMPALA_LLVM_DEBUG_VERSION"))
+    gcc_package = ToolchainPackage("gcc")
+    toolchain_packages += [llvm_package, llvm_package_asserts, gcc_package]
+    toolchain_packages += [ToolchainPackage(p) for p in
+                           ["avro", "binutils", "boost", "breakpad", "bzip2", "calloncehack", "cctz",
+                            "cloudflarezlib", "cmake", "crcutil", "curl", "flatbuffers", "gdb", "gflags",
+                            "glog", "gperftools", "gtest", "jwt-cpp", "libev", "libunwind", "lz4", "mold",
+                            "openldap", "orc", "protobuf", "python", "rapidjson", "re2", "snappy", "tpc-h",
+                            "tpc-ds", "zlib", "zstd"]]
+    python3_package = ToolchainPackage(
+        "python", explicit_version=os.environ.get("IMPALA_PYTHON3_VERSION"))
+    toolchain_packages += [python3_package]
+    toolchain_packages += get_unique_toolchain_downloads(
+        ["thrift:cpp", "thrift:java", "thrift:py"])
+    protobuf_package_clang = ToolchainPackage(
+        "protobuf", explicit_version=os.environ.get("IMPALA_PROTOBUF_CLANG_VERSION"))
+    toolchain_packages += [protobuf_package_clang]
+    if platform.machine() == 'aarch64':
+        toolchain_packages.append(ToolchainPackage("hadoop-client"))
+    # Check whether this platform is supported (or whether a valid custom toolchain
+    # has been provided).
+    if not try_get_platform_release_label() \
+            or not try_get_platform_release_label().toolchain:
+        toolchain_packages_home = os.environ.get("IMPALA_TOOLCHAIN_PACKAGES_HOME")
+        # This would throw an exception if the custom toolchain were not valid
+        check_custom_toolchain(toolchain_packages_home, toolchain_packages)
+        # Nothing to download
+        return []
+    return toolchain_packages


 def get_hadoop_downloads():
-  cluster_components = []
-  hadoop = CdpComponent("hadoop")
-  hbase = CdpComponent("hbase", archive_basename_tmpl="hbase-${version}-bin",
-                       unpack_directory_tmpl="hbase-${version}")
+    cluster_components = []
+    hbase = CdpComponent("hbase", archive_basename_tmpl="hbase-${version}-bin",
+                         unpack_directory_tmpl="hbase-${version}")

-  use_apache_ozone = os.environ["USE_APACHE_OZONE"] == "true"
-  if use_apache_ozone:
-    ozone = ApacheComponent("ozone", component_path_tmpl="ozone/${version}")
-  else:
-    ozone = CdpComponent("ozone")
+    use_apache_ozone = os.environ["USE_APACHE_OZONE"] == "true"
+    if use_apache_ozone:
+        ozone = ApacheComponent("ozone", component_path_tmpl="ozone/${version}")
+    else:
+        ozone = CdpComponent("ozone")

-  use_apache_hive = os.environ["USE_APACHE_HIVE"] == "true"
-  if use_apache_hive:
-    hive = ApacheComponent("hive", archive_basename_tmpl="apache-hive-${version}-bin")
-    hive_src = ApacheComponent("hive", archive_basename_tmpl="apache-hive-${version}-src")
-  else:
-    hive = CdpComponent("hive", archive_basename_tmpl="apache-hive-${version}-bin")
-    hive_src = CdpComponent("hive-source",
-                            explicit_version=os.environ.get("IMPALA_HIVE_VERSION"),
-                            archive_basename_tmpl="hive-${version}-source",
-                            unpack_directory_tmpl="hive-${version}")
+    use_apache_hive = os.environ["USE_APACHE_HIVE"] == "true"
+    if use_apache_hive:
+        hive = ApacheComponent("hive", archive_basename_tmpl="apache-hive-${version}-bin")
+        hive_src = ApacheComponent("hive", archive_basename_tmpl="apache-hive-${version}-src")
+    else:
+        hive = CdpComponent("hive", archive_basename_tmpl="apache-hive-${version}-bin")
+        hive_src = CdpComponent("hive-source",
+                                explicit_version=os.environ.get("IMPALA_HIVE_VERSION"),
+                                archive_basename_tmpl="hive-${version}-source",
+                                unpack_directory_tmpl="hive-${version}")

-  tez = CdpComponent("tez", archive_basename_tmpl="tez-${version}-minimal", makedir=True)
-  ranger = CdpComponent("ranger", archive_basename_tmpl="ranger-${version}-admin")
-  use_override_hive = \
-      "HIVE_VERSION_OVERRIDE" in os.environ and os.environ["HIVE_VERSION_OVERRIDE"] != ""
-  # If we are using a locally built Hive we do not have a need to pull hive as a
-  # dependency
-  cluster_components.extend([hadoop, hbase, ozone])
-  if not use_override_hive:
-    cluster_components.extend([hive, hive_src])
-  cluster_components.extend([tez, ranger])
-  return cluster_components
+    # Use Apache Hadoop if the environment variable is set to true
+    use_apache_hadoop = os.environ["USE_APACHE_HADOOP"] == "true"  # Check for USE_APACHE_HADOOP environment variable
+    if use_apache_hadoop:
+        hadoop = ApacheComponent("hadoop", archive_basename_tmpl="hadoop-${version}")
+    else:
+        hadoop = CdpComponent("hadoop")
+
+    tez = CdpComponent("tez", archive_basename_tmpl="tez-${version}-minimal", makedir=True)
+    ranger = CdpComponent("ranger", archive_basename_tmpl="ranger-${version}-admin")
+    use_override_hive = \
+        "HIVE_VERSION_OVERRIDE" in os.environ and os.environ["HIVE_VERSION_OVERRIDE"] != ""
+    # If we are using a locally built Hive we do not have a need to pull hive as a
+    # dependency
+    cluster_components.extend([hadoop, hbase, ozone])
+    if not use_override_hive:
+        cluster_components.extend([hive, hive_src])
+    cluster_components.extend([tez, ranger])
+    return cluster_components


 def get_kudu_downloads():
-  # Toolchain Kudu includes Java artifacts.
-  return [ToolchainKudu()]
+    # Toolchain Kudu includes Java artifacts.
+    return [ToolchainKudu()]


 def main():
-  """
-  Validates that bin/impala-config.sh has been sourced by verifying that $IMPALA_HOME
-  and $IMPALA_TOOLCHAIN_PACKAGES_HOME are in the environment. We assume that if these
-  are set, then IMPALA_<PACKAGE>_VERSION environment variables are also set. This will
-  create the directory specified by $IMPALA_TOOLCHAIN_PACKAGES_HOME if it does not
-  already exist. Then, it will compute what packages need to be downloaded. Packages are
-  only downloaded if they are not already present. There are two main categories of
-  packages. Toolchain packages are native packages built using the native toolchain.
-  These are always downloaded. Hadoop component packages are the CDP builds of Hadoop
-  components such as Hadoop, Hive, HBase, etc. Hadoop component packages are organized as
-  a consistent set of compatible version via a build number (i.e. CDP_BUILD_NUMBER).
-  Hadoop component packages are only downloaded if $DOWNLOAD_CDH_COMPONENTS is true. The
-  versions used for Hadoop components come from the CDP versions based on the
-  $CDP_BUILD_NUMBER. CDP Hadoop packages are downloaded into $CDP_COMPONENTS_HOME.
-  """
-  logging.basicConfig(level=logging.INFO,
-      format='%(asctime)s %(threadName)s %(levelname)s: %(message)s')
-  # 'sh' module logs at every execution, which is too noisy
-  logging.getLogger("sh").setLevel(logging.WARNING)
+    """
+    Validates that bin/impala-config.sh has been sourced by verifying that $IMPALA_HOME
+    and $IMPALA_TOOLCHAIN_PACKAGES_HOME are in the environment. We assume that if these
+    are set, then IMPALA_<PACKAGE>_VERSION environment variables are also set. This will
+    create the directory specified by $IMPALA_TOOLCHAIN_PACKAGES_HOME if it does not
+    already exist. Then, it will compute what packages need to be downloaded. Packages are
+    only downloaded if they are not already present. There are two main categories of
+    packages. Toolchain packages are native packages built using the native toolchain.
+    These are always downloaded. Hadoop component packages are the CDP builds of Hadoop
+    components such as Hadoop, Hive, HBase, etc. Hadoop component packages are organized as
+    a consistent set of compatible version via a build number (i.e. CDP_BUILD_NUMBER).
+    Hadoop component packages are only downloaded if $DOWNLOAD_CDH_COMPONENTS is true. The
+    versions used for Hadoop components come from the CDP versions based on the
+    $CDP_BUILD_NUMBER. CDP Hadoop packages are downloaded into $CDP_COMPONENTS_HOME.
+    """
+    logging.basicConfig(level=logging.INFO,
+                        format='%(asctime)s %(threadName)s %(levelname)s: %(message)s')
+    # 'sh' module logs at every execution, which is too noisy
+    logging.getLogger("sh").setLevel(logging.WARNING)

-  if not os.environ.get("IMPALA_HOME"):
-    logging.error("Impala environment not set up correctly, make sure "
-          "impala-config.sh is sourced.")
-    sys.exit(1)
+    if not os.environ.get("IMPALA_HOME"):
+        logging.error("Impala environment not set up correctly, make sure "
+                      "impala-config.sh is sourced.")
+        sys.exit(1)

-  # Create the toolchain directory if necessary
-  create_directory_from_env_var("IMPALA_TOOLCHAIN_PACKAGES_HOME")
+    # Create the toolchain directory if necessary
+    create_directory_from_env_var("IMPALA_TOOLCHAIN_PACKAGES_HOME")

-  downloads = []
-  if os.getenv("SKIP_TOOLCHAIN_BOOTSTRAP", "false") != "true":
-    downloads += get_toolchain_downloads()
-  if os.getenv("DOWNLOAD_CDH_COMPONENTS", "false") == "true":
-    create_directory_from_env_var("CDP_COMPONENTS_HOME")
-    create_directory_from_env_var("APACHE_COMPONENTS_HOME")
-    if os.getenv("SKIP_TOOLCHAIN_BOOTSTRAP", "false") != "true":
-      # Kudu is currently sourced from native-toolchain
-      downloads += get_kudu_downloads()
-    downloads += get_hadoop_downloads()
+    downloads = []
+    if os.getenv("SKIP_TOOLCHAIN_BOOTSTRAP", "false") != "true":
+        downloads += get_toolchain_downloads()
+    if os.getenv("DOWNLOAD_CDH_COMPONENTS", "false") == "true":
+        create_directory_from_env_var("CDP_COMPONENTS_HOME")
+        create_directory_from_env_var("APACHE_COMPONENTS_HOME")
+        if os.getenv("SKIP_TOOLCHAIN_BOOTSTRAP", "false") != "true":
+            # Kudu is currently sourced from native-toolchain
+            downloads += get_kudu_downloads()
+        downloads += get_hadoop_downloads()

-  components_needing_download = [d for d in downloads if d.needs_download()]
+    components_needing_download = [d for d in downloads if d.needs_download()]

-  def download(component):
-    component.download()
+    def download(component):
+        component.download()

-  execute_many(download, components_needing_download)
+    execute_many(download, components_needing_download)


 if __name__ == "__main__": main()
Index: bin/bootstrap_system.sh
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/bin/bootstrap_system.sh b/bin/bootstrap_system.sh
--- a/bin/bootstrap_system.sh	(revision fd92dbc6bfd9be61af6d1e3f9d105ad50f158f27)
+++ b/bin/bootstrap_system.sh	(date 1739953975955)
@@ -46,7 +46,7 @@

 : ${IMPALA_HOME:=$(cd "$(dirname $0)"/..; pwd)}
 export IMPALA_HOME
-
+export PATH="/usr/bin:$PATH"
 if [[ -t 1 ]] # if on an interactive terminal
 then
   echo "This script will clobber some system settings. Are you sure you want to"
@@ -378,7 +378,7 @@

 redhat notindocker sudo service postgresql initdb
 redhat notindocker sudo service postgresql stop
-redhat indocker sudo -u postgres PGDATA=/var/lib/pgsql/data pg_ctl init
+redhat indocker sudo -u postgres PGDATA=/var/lib/pgsql/data_impala pg_ctl init
 ubuntu sudo service postgresql stop

 # These configurations expose connectiong to PostgreSQL via md5-hashed
@@ -392,20 +392,20 @@
 ubuntu sudo sed -ri 's/host +all +all +127.0.0.1\/32/host all all samenet/g' \
   /etc/postgresql/*/main/pg_hba.conf
 redhat sudo sed -ri 's/local +all +all +(ident|peer)/local all all trust/g' \
-  /var/lib/pgsql/data/pg_hba.conf
+  /var/lib/pgsql/data_impala/pg_hba.conf
 # Accept md5 passwords from localhost
-redhat sudo sed -i -e 's,\(host.*\)ident,\1md5,' /var/lib/pgsql/data/pg_hba.conf
+redhat sudo sed -i -e 's,\(host.*\)ident,\1md5,' /var/lib/pgsql/data_impala/pg_hba.conf
 # Accept remote connections from the hosts in the same subnet.
 redhat sudo sed -ri "s/#listen_addresses = 'localhost'/listen_addresses = '0.0.0.0'/g" \
-  /var/lib/pgsql/data/postgresql.conf
+  /var/lib/pgsql/data_impala/postgresql.conf
 redhat sudo sed -ri 's/host +all +all +127.0.0.1\/32/host all all samenet/g' \
-  /var/lib/pgsql/data/pg_hba.conf
+  /var/lib/pgsql/data_impala/pg_hba.conf

 ubuntu sudo service postgresql start
 redhat notindocker sudo service postgresql start
 # Important to redirect pg_ctl to a logfile, lest it keep the stdout
 # file descriptor open, preventing the shell from exiting.
-redhat indocker sudo -u postgres PGDATA=/var/lib/pgsql/data bash -c \
+redhat indocker sudo -u postgres PGDATA=/var/lib/pgsql/data_impala bash -c \
   "pg_ctl start -w --timeout=120 >> /var/lib/pgsql/pg.log 2>&1"

 # Set up postgres for HMS
@@ -500,9 +500,9 @@
 # Try to prepopulate the m2 directory to save time
 if [[ "${PREPOPULATE_M2_REPOSITORY:-true}" == true ]] ; then
   echo ">>> Populating m2 directory..."
-  if ! bin/jenkins/populate_m2_directory.py ; then
-    echo "Failed to prepopulate the m2 directory. Continuing..."
-  fi
+#  if ! bin/jenkins/populate_m2_directory.py ; then
+#    echo "Failed to prepopulate the m2 directory. Continuing..."
+#  fi
 else
   echo ">>> Skip populating m2 directory"
 fi
Index: bin/impala-config.sh
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/bin/impala-config.sh b/bin/impala-config.sh
--- a/bin/impala-config.sh	(revision fd92dbc6bfd9be61af6d1e3f9d105ad50f158f27)
+++ b/bin/impala-config.sh	(date 1740664460277)
@@ -80,6 +80,12 @@
 # This is added temporarily to help transitioning from Avro C to C++ library.
 export USE_AVRO_CPP=${USE_AVRO_CPP:=false}

+# Whether to use the Apache releases of these components
+export USE_APACHE_HADOOP=${USE_APACHE_HADOOP-false}
+export USE_APACHE_HBASE=${USE_APACHE_HBASE-false}
+export USE_APACHE_RANGER=${USE_APACHE_RANGER-false}
+export USE_APACHE_TEZ=${USE_APACHE_TEZ-false}
+
 # The unique build id of the toolchain to use if bootstrapping. This is generated by the
 # native-toolchain build when publishing its build artifacts. This should be changed when
 # moving to a different build of the toolchain, e.g. when a version is bumped or a
@@ -270,12 +276,22 @@
 export CDP_TEZ_VERSION=0.9.1.7.2.18.0-369

 # Ref: https://infra.apache.org/release-download-pages.html#closer
-: ${APACHE_MIRROR:="https://www.apache.org/dyn/closer.cgi"}
+: ${APACHE_MIRROR:="https://mirrors.huaweicloud.com/apache"}
 export APACHE_MIRROR
 export APACHE_HIVE_VERSION=3.1.3
 export APACHE_HIVE_STORAGE_API_VERSION=2.7.0
 export APACHE_OZONE_VERSION=1.3.0

+export APACHE_HADOOP_VERSION=3.3.4
+export APACHE_HBASE_VERSION=2.4.13
+export APACHE_RANGER_VERSION=2.4.0
+export APACHE_TEZ_VERSION=0.10.1
+
+
+
+
+
+
 # Java dependencies that are not also runtime components. Declaring versions here allows
 # other branches to override them in impala-config-branch.sh for cleaner patches.
 export IMPALA_BOUNCY_CASTLE_VERSION=1.68
@@ -429,6 +445,20 @@
   export IMPALA_OZONE_URL=${CDP_OZONE_URL-}
 fi

+
+# If USE_APACHE_HADOOP is true, build against the Apache release of Hadoop
+if $USE_APACHE_HADOOP; then
+  # Hadoop install path, reusing the existing APACHE_COMPONENTS_HOME variable
+  export HADOOP_HOME="${APACHE_COMPONENTS_HOME}/hadoop-${APACHE_HADOOP_VERSION}"
+
+  # Hadoop download URL
+  export APACHE_HADOOP_URL="${APACHE_MIRROR}/hadoop/common/hadoop-${APACHE_HADOOP_VERSION}/hadoop-${APACHE_HADOOP_VERSION}.tar.gz"
+
+  export IMPALA_HADOOP_VERSION=${APACHE_HADOOP_VERSION}
+  export IMPALA_HADOOP_URL=${APACHE_HADOOP_URL}
+fi
+
+
 # It is important to have a coherent view of the JAVA_HOME and JAVA executable.
 # The JAVA_HOME should be determined first, then the JAVA executable should be
 # derived from JAVA_HOME. For development, it is useful to be able to specify
Index: infra/python/deps/pip_download.py
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/infra/python/deps/pip_download.py b/infra/python/deps/pip_download.py
--- a/infra/python/deps/pip_download.py	(revision fd92dbc6bfd9be61af6d1e3f9d105ad50f158f27)
+++ b/infra/python/deps/pip_download.py	(date 1739342464929)
@@ -22,19 +22,21 @@
 # This script requires Python 2.7+.

 from __future__ import absolute_import, division, print_function
+
 import hashlib
 import multiprocessing.pool
 import os
 import os.path
 import re
+import subprocess
 import sys
 from random import randint
 from time import sleep
-import subprocess

 NUM_DOWNLOAD_ATTEMPTS = 8

-PYPI_MIRROR = os.environ.get('PYPI_MIRROR', 'https://pypi.python.org')
+# PYPI_MIRROR = os.environ.get('PYPI_MIRROR', 'https://pypi.python.org')
+PYPI_MIRROR = os.environ.get('PYPI_MIRROR', 'https://pypi.tuna.tsinghua.edu.cn')

 # The requirement files that list all of the required packages and versions.
 REQUIREMENTS_FILES = ['requirements.txt', 'setuptools-requirements.txt',
@@ -43,120 +45,126 @@


 def check_digest(filename, algorithm, expected_digest):
-  try:
-    supported_algorithms = hashlib.algorithms_available
-  except AttributeError:
-    # Fallback to hardcoded set if hashlib.algorithms_available doesn't exist.
-    supported_algorithms = set(['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'])
-  if algorithm not in supported_algorithms:
-    print('Hash algorithm {0} is not supported by hashlib'.format(algorithm))
-    return False
-  h = hashlib.new(algorithm)
-  h.update(open(filename, mode='rb').read())
-  actual_digest = h.hexdigest()
-  return actual_digest == expected_digest
+    try:
+        supported_algorithms = hashlib.algorithms_available
+    except AttributeError:
+        # Fallback to hardcoded set if hashlib.algorithms_available doesn't exist.
+        supported_algorithms = set(['md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512'])
+    if algorithm not in supported_algorithms:
+        print('Hash algorithm {0} is not supported by hashlib'.format(algorithm))
+        return False
+    h = hashlib.new(algorithm)
+    h.update(open(filename, mode='rb').read())
+    actual_digest = h.hexdigest()
+    return actual_digest == expected_digest


 def retry(func):
-  '''Retry decorator.'''
+    '''Retry decorator.'''

-  def wrapper(*args, **kwargs):
-    for try_num in range(NUM_DOWNLOAD_ATTEMPTS):
-      if try_num > 0:
-        sleep_len = randint(5, 10 * 2 ** try_num)
-        print('Sleeping for {0} seconds before retrying'.format(sleep_len))
-        sleep(sleep_len)
-      try:
-        result = func(*args, **kwargs)
-        if result:
-          return result
-      except Exception as e:
-        print(e)
-    print('Download failed after several attempts.')
-    sys.exit(1)
+    def wrapper(*args, **kwargs):
+        for try_num in range(NUM_DOWNLOAD_ATTEMPTS):
+            if try_num > 0:
+                sleep_len = randint(5, 10 * 2 ** try_num)
+                print('Sleeping for {0} seconds before retrying'.format(sleep_len))
+                sleep(sleep_len)
+            try:
+                result = func(*args, **kwargs)
+                if result:
+                    return result
+            except Exception as e:
+                print(e)
+        print('Download failed after several attempts.')
+        sys.exit(1)

-  return wrapper
+    return wrapper

+
 def get_package_info(pkg_name, pkg_version):
-  '''Returns the file name, path, hash algorithm and digest of the package.'''
-  # We store the matching result in the candidates list instead of returning right away
-  # to sort them and return the first value in alphabetical order. This ensures that the
-  # same result is always returned even if the ordering changed on the server.
-  candidates = []
-  normalized_name = re.sub(r"[-_.]+", "-", pkg_name).lower()
-  url = '{0}/simple/{1}/'.format(PYPI_MIRROR, normalized_name)
-  print('Getting package info from {0}'.format(url))
-  # The web page should be in PEP 503 format (https://www.python.org/dev/peps/pep-0503/).
-  # We parse the page with regex instead of an html parser because that requires
-  # downloading an extra package before running this script. Since the HTML is guaranteed
-  # to be formatted according to PEP 503, this is acceptable.
-  pkg_info = subprocess.check_output(
-      ["wget", "-q", "-O", "-", url], universal_newlines=True)
-  regex = r'<a .*?href=\".*?packages/(.*?)#(.*?)=(.*?)\".*?>(.*?)<\/a>'
-  for match in re.finditer(regex, pkg_info):
-    path = match.group(1)
-    hash_algorithm = match.group(2)
-    digest = match.group(3)
-    file_name = match.group(4)
-    # Make sure that we consider only non Wheel archives, because those are not supported.
-    if (file_name.endswith('-{0}.tar.gz'.format(pkg_version)) or
-        file_name.endswith('-{0}.tar.bz2'.format(pkg_version)) or
-        file_name.endswith('-{0}.zip'.format(pkg_version))):
-      candidates.append((file_name, path, hash_algorithm, digest))
-  if not candidates:
-    print('Could not find archive to download for {0} {1}'.format(pkg_name, pkg_version))
-    return (None, None, None, None)
-  return sorted(candidates)[0]
+    '''Returns the file name, path, hash algorithm and digest of the package.'''
+    # We store the matching result in the candidates list instead of returning right away
+    # to sort them and return the first value in alphabetical order. This ensures that the
+    # same result is always returned even if the ordering changed on the server.
+    candidates = []
+    normalized_name = re.sub(r"[-_.]+", "-", pkg_name).lower()
+    url = '{0}/simple/{1}/'.format(PYPI_MIRROR, normalized_name)
+    print('Getting package info from {0}'.format(url))
+    # The web page should be in PEP 503 format (https://www.python.org/dev/peps/pep-0503/).
+    # We parse the page with regex instead of an html parser because that requires
+    # downloading an extra package before running this script. Since the HTML is guaranteed
+    # to be formatted according to PEP 503, this is acceptable.
+    pkg_info = subprocess.check_output(
+        # ["wget", "-q", "-O", "-", url], universal_newlines=True)
+        ["curl", "-sL", url], universal_newlines=True)
+    regex = r'<a .*?href=\".*?packages/(.*?)#(.*?)=(.*?)\".*?>(.*?)<\/a>'
+    for match in re.finditer(regex, pkg_info):
+        path = match.group(1)
+        hash_algorithm = match.group(2)
+        digest = match.group(3)
+        file_name = match.group(4)
+        # Make sure that we consider only non Wheel archives, because those are not supported.
+        if (file_name.endswith('-{0}.tar.gz'.format(pkg_version)) or
+                file_name.endswith('-{0}.tar.bz2'.format(pkg_version)) or
+                file_name.endswith('-{0}.zip'.format(pkg_version))):
+            candidates.append((file_name, path, hash_algorithm, digest))
+    if not candidates:
+        print('Could not find archive to download for {0} {1}'.format(pkg_name, pkg_version))
+        return (None, None, None, None)
+    return sorted(candidates)[0]

+
 @retry
 def download_package(pkg_name, pkg_version):
-  file_name, path, hash_algorithm, expected_digest = get_package_info(pkg_name,
-      pkg_version)
-  if not file_name:
-    return False
-  if os.path.isfile(file_name) and check_digest(file_name, hash_algorithm,
-      expected_digest):
-    print('File with matching digest already exists, skipping {0}'.format(file_name))
-    return True
-  pkg_url = '{0}/packages/{1}'.format(PYPI_MIRROR, path)
-  print('Downloading {0} from {1}'.format(file_name, pkg_url))
-  if 0 != subprocess.check_call(["wget", pkg_url, "-q", "-O", file_name]):
-    return False
-  if check_digest(file_name, hash_algorithm, expected_digest):
-    return True
-  else:
-    print('Hash digest check failed in file {0}.'.format(file_name))
-    return False
+    file_name, path, hash_algorithm, expected_digest = get_package_info(pkg_name,
+                                                                        pkg_version)
+    if not file_name:
+        return False
+    if os.path.isfile(file_name) and check_digest(file_name, hash_algorithm,
+                                                  expected_digest):
+        print('File with matching digest already exists, skipping {0}'.format(file_name))
+        return True
+    pkg_url = '{0}/packages/{1}'.format(PYPI_MIRROR, path)
+    print('Downloading {0} from {1}'.format(file_name, pkg_url))
+    # if 0 != subprocess.check_call(["wget", pkg_url, "-q", "-O", file_name]):
+    if 0 != subprocess.check_call(["curl", "-fsSL", "-o", file_name, pkg_url]):
+        return False
+    if check_digest(file_name, hash_algorithm, expected_digest):
+        return True
+    else:
+        print('Hash digest check failed in file {0}.'.format(file_name))
+        return False

+
 def main():
-  if len(sys.argv) > 1:
-    _, pkg_name, pkg_version = sys.argv
-    download_package(pkg_name, pkg_version)
-    return
+    if len(sys.argv) > 1:
+        _, pkg_name, pkg_version = sys.argv
+        download_package(pkg_name, pkg_version)
+        return

-  pool = multiprocessing.pool.ThreadPool(processes=min(multiprocessing.cpu_count(), 4))
-  results = []
+    pool = multiprocessing.pool.ThreadPool(processes=min(multiprocessing.cpu_count(), 4))
+    results = []

-  for requirements_file in REQUIREMENTS_FILES:
-    # If the package name and version are not specified in the command line arguments,
-    # download the packages that in requirements.txt.
-    # requirements.txt follows the standard pip grammar.
-    for line in open(requirements_file):
-      # A hash symbol ("#") represents a comment that should be ignored.
-      line = line.split("#")[0]
-      # A semi colon (";") specifies some additional condition for when the package
-      # should be installed (for example a specific OS). We can ignore this and download
-      # the package anyways because the installation script(bootstrap_virtualenv.py) can
-      # take it into account.
-      l = line.split(";")[0].strip()
-      if not l:
-        continue
-      pkg_name, pkg_version = l.split('==')
-      results.append(pool.apply_async(
-        download_package, args=[pkg_name.strip(), pkg_version.strip()]))
+    for requirements_file in REQUIREMENTS_FILES:
+        # If the package name and version are not specified in the command line arguments,
+        # download the packages that in requirements.txt.
+        # requirements.txt follows the standard pip grammar.
+        for line in open(requirements_file):
+            # A hash symbol ("#") represents a comment that should be ignored.
+            line = line.split("#")[0]
+            # A semi colon (";") specifies some additional condition for when the package
+            # should be installed (for example a specific OS). We can ignore this and download
+            # the package anyways because the installation script(bootstrap_virtualenv.py) can
+            # take it into account.
+            l = line.split(";")[0].strip()
+            if not l:
+                continue
+            pkg_name, pkg_version = l.split('==')
+            results.append(pool.apply_async(
+                download_package, args=[pkg_name.strip(), pkg_version.strip()]))

-    for x in results:
-      x.get()
+        for x in results:
+            x.get()

+
 if __name__ == '__main__':
-  main()
+    main()

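With the patch applied, the Apache-component switch can be exercised straight from a shell. Below is a minimal sketch assuming the patched `bin/impala-config.sh` shown above; all variable names come from the diff itself, and the mirror value is simply the patch default:

```bash
# Build against the Apache release of Hadoop instead of the CDP build.
export USE_APACHE_HADOOP=true
# Optional override; the patch already defaults to the Huawei Cloud mirror.
export APACHE_MIRROR="https://mirrors.huaweicloud.com/apache"

# Re-source the config so the derived variables pick up the switch.
source bin/impala-config.sh
echo "HADOOP_HOME=${HADOOP_HOME}"
echo "IMPALA_HADOOP_URL=${IMPALA_HADOOP_URL}"
```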

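To spot-check the TUNA mirror wiring, the patched `pip_download.py` can be run for a single pinned package, since its `main()` accepts a name/version pair on the command line. The `six 1.16.0` pair below is only an example; any entry pinned in `requirements.txt` works:

```bash
cd "$IMPALA_HOME"/infra/python/deps
# PYPI_MIRROR overrides the TUNA default baked into the patch.
PYPI_MIRROR="https://pypi.tuna.tsinghua.edu.cn" python pip_download.py six 1.16.0
```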