diff --git a/docs/Caching.md b/docs/Caching.md
index 619605c30..08bedc83e 100644
--- a/docs/Caching.md
+++ b/docs/Caching.md
@@ -48,12 +48,63 @@ See https://github.com/mozilla/sccache/blob/8567bbe2ba493153e76177c1f9a6f98cc7ba
### C/C++ preprocessor
-In "preprocessor cache mode", [explained in the local doc](Local.md), an
-extra key is computed to cache the preprocessor output itself. It is very close
-to the C/C++ compiler one, but with additional elements:
+In "preprocessor cache mode", explained below, an extra key is computed to cache the preprocessor output itself.
+It is very close to the C/C++ compiler one, but with additional elements:
* The path of the input file
* The hash of the input file
Note that some compiler options can disable preprocessor cache mode. As of this
writing, only `-Xpreprocessor` and `-Wp,*` do.
+
+#### Preprocessor cache mode
+
+This is inspired by [ccache's direct mode](https://ccache.dev/manual/3.7.9.html#_the_direct_mode) and works roughly the same.
+It adds a cache that makes it possible to skip preprocessing when compiling C/C++. This can make it much faster to return
+compilation results from cache, since preprocessing is a major expense for these languages.
+
+Preprocessor cache mode is controlled by a configuration option that is true by default, as well as by the additional conditions described below.
+
+To ensure that the cached preprocessor results for a source file correspond to the un-preprocessed inputs, sccache needs
+to remember, among other things, all files included by the source file. sccache also needs to recognize
+when "external factors" may change the results, such as system time if the `__TIME__` macro is used
+in a source file. How conservative sccache is about some of these external factors is configurable; see below.
+
+Preprocessor cache mode will be disabled in any of the following cases:
+
+- Not compiling C or C++
+- The configuration option is false
+- Not using GCC or Clang
+- Not using local storage for the cache
+- Either of the compiler options `-Xpreprocessor` or `-Wp,*` is present
+- The modification time of one of the header files is too new (avoids a race condition)
+- Certain strings such as `__DATE__`, `__TIME__`, `__TIMESTAMP__` are present in the source code,
+ indicating that the preprocessor result may change based on external factors
+
+The preprocessor cache may silently produce stale results in any of the following cases:
+
+- A header file that would have been included had it existed did not exist when the source file was compiled and its results
+  were cached. sccache does not know about such files, so it cannot invalidate the result when the header file is later created.
+- A time macro such as `__TIME__` is used in the source code and `ignore_time_macros` is enabled
+- There are other external factors influencing the preprocessing result that sccache does not know about
+
+Configuration options and their default values:
+
+- `use_preprocessor_cache_mode`: `true`. Whether to use preprocessor cache mode. This can be overridden for a single sccache invocation by setting the environment variable `SCCACHE_DIRECT` to `true`/`on`/`1` or `false`/`off`/`0`.
+- `file_stat_matches`: `false`. If false, compare header files only by hashing their contents. If true, use size + ctime + mtime to check whether a file has changed. See the other flags below for more control over this behavior.
+- `use_ctime_for_stat`: `true`. If true, use the ctime (file status change time on UNIX, creation time on Windows) to check whether a file has changed. Disabling this can be useful when backdating modification times in a controlled manner.
+- `ignore_time_macros`: `false`. If true, ignore `__DATE__`, `__TIME__` and `__TIMESTAMP__` being present in the source code. This speeds up preprocessor cache mode, but can produce stale results.
+- `skip_system_headers`: `false`. If true, only add the paths of included system headers to the cache key, ignoring the headers' contents.
+- `hash_working_directory`: `true`. If true, add the current working directory to the cache key to distinguish two compilations from different directories.
+- `max_size`: `10737418240`. The maximum size of the preprocessor cache in bytes; defaults to the default disk cache size (10 GiB).
+- `rw_mode`: `ReadWrite`. `ReadOnly` or `ReadWrite` mode for the cache.
+- `dir`: `path_to_cache_directory`. Path to the preprocessor cache. By default, the DiskCache's directory is used, with the subdirectory `preprocessor`.
+
+See where to write the config in [the configuration doc](Configuration.md).
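+
+For illustration, the defaults above correspond to a config file fragment like the following (a sketch; the `dir` path is only an example, not a required location):
+
+```toml
+[cache.preprocessor_cache_mode]
+use_preprocessor_cache_mode = true
+file_stat_matches = false
+use_ctime_for_stat = true
+ignore_time_macros = false
+skip_system_headers = false
+hash_working_directory = true
+max_size = 10737418240
+rw_mode = "ReadWrite"
+dir = "/tmp/.cache/sccache-preprocess/"
+```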
+
+`sccache --debug-preprocessor-cache` can be used to investigate the content of the preprocessor cache.
+
+The preprocessor cache relies on random-access reads and writes; file systems that do not support these, such as `s3fs`, are therefore not supported.
\ No newline at end of file
diff --git a/docs/Configuration.md b/docs/Configuration.md
index b45d1c431..cb1bc3497 100644
--- a/docs/Configuration.md
+++ b/docs/Configuration.md
@@ -33,7 +33,7 @@ dir = "/tmp/.cache/sccache"
size = 7516192768 # 7 GiBytes
# See the local docs on more explanations about this mode
-[cache.disk.preprocessor_cache_mode]
+[cache.preprocessor_cache_mode]
# Whether to use the preprocessor cache mode
use_preprocessor_cache_mode = true
# Whether to use file times to check for changes
@@ -46,6 +46,12 @@ ignore_time_macros = false
skip_system_headers = false
# Whether hash the current working directory
hash_working_directory = true
+# Maximum size of the preprocessor cache, in bytes
+max_size = 1048576
+# ReadOnly/ReadWrite mode
+rw_mode = "ReadWrite"
+# Path to the cache
+dir = "/tmp/.cache/sccache-preprocess/"
[cache.gcs]
# optional oauth url
diff --git a/docs/Local.md b/docs/Local.md
index f1b4543ca..d0447b81c 100644
--- a/docs/Local.md
+++ b/docs/Local.md
@@ -6,51 +6,6 @@ The default cache size is 10 gigabytes. To change this, set `SCCACHE_CACHE_SIZE`
The local storage only supports a single sccache server at a time. Multiple concurrent servers will race and cause spurious build failures.
-## Preprocessor cache mode
-
-This is inspired by [ccache's direct mode](https://ccache.dev/manual/3.7.9.html#_the_direct_mode) and works roughly the same.
-It adds a cache that allows to skip preprocessing when compiling C/C++. This can make it much faster to return compilation results
-from cache since preprocessing is a major expense for these.
-
-Preprocessor cache mode is controlled by a configuration option which is true by default, as well as additional conditions described below.
-
-To ensure that the cached preprocessor results for a source file correspond to the un-preprocessed inputs, sccache needs
-to remember, among other things, all files included by the source file. sccache also needs to recognize
-when "external factors" may change the results, such as system time if the `__TIME__` macro is used
-in a source file. How conservative sccache is about some of these external factors is configurable, see below.
-
-Preprocessor cache mode will be disabled in any of the following cases:
-
-- Not compiling C or C++
-- The configuration option is false
-- Not using GCC or Clang
-- Not using local storage for the cache
-- Any of the compiler options `-MP`, `-Xpreprocessor`, `-Wp,` are present
-- The modification time of one of the header files is too new (avoids a race condition)
-- Certain strings such as `__DATE__`, `__TIME__`, `__TIMESTAMP__` are present in the source code,
- indicating that the preprocessor result may change based on external factors
-
-The preprocessor cache may silently produce stale results in any of the following cases:
-
-- When a source file was compiled and its results were cached, a header file would have been included if it existed, but it did
- not exist at the time. sccache does not know about such files, so it cannot invalidate the result if the header file later exists.
-- A macro such as `__TIME__` (etc) is used in the source code and `ignore_time_macros` is enabled
-- There are other external factors influencing the preprocessing result that sccache does not know about
-
-Configuration options and their default values:
-
-- `use_preprocessor_cache_mode`: `true`. Whether to use preprocessor cache mode. This can be overridden for an sccache invocation by setting the environment variable `SCCACHE_DIRECT` to `true`/`on`/`1` or `false`/`off`/`0`.
-- `file_stat_matches`: `false`. If false, only compare header files by hashing their contents. If true, will use size + ctime + mtime to check whether a file has changed. See other flags below for more control over this behavior.
-- `use_ctime_for_stat`: `true`. If true, uses the ctime (file status change on UNIX, creation time on Windows) to check that a file has/hasn't changed. Can be useful to disable when backdating modification times in a controlled manner.
-
-- `ignore_time_macros`: `false`. If true, ignore `__DATE__`, `__TIME__` and `__TIMESTAMP__` being present in the source code. Will speed up preprocessor cache mode, but can produce stale results.
-
-- `skip_system_headers`: `false`. If true, the preprocessor cache will only add the paths of included system headers to the cache key but ignore the headers' contents.
-
-- `hash_working_directory`: `true`. If true, will add the current working directory to the cache key to distinguish two compilations from different directories.
-
-See where to write the config in [the configuration doc](Configuration.md).
-
## Read-only cache mode
By default, the local cache operates in read/write mode. The `SCCACHE_LOCAL_RW_MODE` environment variable can be set to `READ_ONLY` (or `READ_WRITE`) to modify this behavior.
diff --git a/src/cache/cache.rs b/src/cache/cache.rs
index 3e883db19..67616dcfe 100644
--- a/src/cache/cache.rs
+++ b/src/cache/cache.rs
@@ -12,6 +12,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use super::preprocessor_cache::PreprocessorCacheStorage;
+use super::storage::Storage;
+use crate::cache::PreprocessorCache;
#[cfg(feature = "azure")]
use crate::cache::azure::AzureBlobCache;
#[cfg(feature = "cos")]
@@ -31,715 +34,189 @@ use crate::cache::redis::RedisCache;
use crate::cache::s3::S3Cache;
#[cfg(feature = "webdav")]
use crate::cache::webdav::WebdavCache;
-use crate::compiler::PreprocessorCacheEntry;
-use crate::config::Config;
-#[cfg(any(
- feature = "azure",
- feature = "gcs",
- feature = "gha",
- feature = "memcached",
- feature = "redis",
- feature = "s3",
- feature = "webdav",
- feature = "oss",
- feature = "cos"
-))]
use crate::config::{self, CacheType};
-use async_trait::async_trait;
-use fs_err as fs;
-
-use serde::{Deserialize, Serialize};
-use std::fmt;
-use std::io::{self, Cursor, Read, Seek, Write};
-use std::path::{Path, PathBuf};
-use std::sync::Arc;
-use std::time::Duration;
-use tempfile::NamedTempFile;
-use zip::write::FileOptions;
-use zip::{CompressionMethod, ZipArchive, ZipWriter};
-
+use crate::config::{Config, DiskCacheConfig};
use crate::errors::*;
+use std::sync::Arc;
-#[cfg(unix)]
-fn get_file_mode(file: &fs::File) -> Result<Option<u32>> {
- use std::os::unix::fs::MetadataExt;
- Ok(Some(file.metadata()?.mode()))
-}
-
-#[cfg(windows)]
-#[allow(clippy::unnecessary_wraps)]
-fn get_file_mode(_file: &fs::File) -> Result<Option<u32>> {
- Ok(None)
-}
-
-#[cfg(unix)]
-fn set_file_mode(path: &Path, mode: u32) -> Result<()> {
- use std::fs::Permissions;
- use std::os::unix::fs::PermissionsExt;
- let p = Permissions::from_mode(mode);
- fs::set_permissions(path, p)?;
- Ok(())
-}
-
-#[cfg(windows)]
-#[allow(clippy::unnecessary_wraps)]
-fn set_file_mode(_path: &Path, _mode: u32) -> Result<()> {
- Ok(())
-}
-
-/// Cache object sourced by a file.
-#[derive(Clone)]
-pub struct FileObjectSource {
- /// Identifier for this object. Should be unique within a compilation unit.
- /// Note that a compilation unit is a single source file in C/C++ and a crate in Rust.
- pub key: String,
- /// Absolute path to the file.
- pub path: PathBuf,
- /// Whether the file must be present on disk and is essential for the compilation.
- pub optional: bool,
-}
-
-/// Result of a cache lookup.
-pub enum Cache {
- /// Result was found in cache.
- Hit(CacheRead),
- /// Result was not found in cache.
- Miss,
- /// Do not cache the results of the compilation.
- None,
- /// Cache entry should be ignored, force compilation.
- Recache,
-}
-
-impl fmt::Debug for Cache {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- match *self {
- Cache::Hit(_) => write!(f, "Cache::Hit(...)"),
- Cache::Miss => write!(f, "Cache::Miss"),
- Cache::None => write!(f, "Cache::None"),
- Cache::Recache => write!(f, "Cache::Recache"),
- }
- }
-}
-
-/// CacheMode is used to represent which mode we are using.
-#[derive(Copy, Clone, Debug, PartialEq, Eq)]
-pub enum CacheMode {
- /// Only read cache from storage.
- ReadOnly,
- /// Full support of cache storage: read and write.
- ReadWrite,
-}
-
-/// Trait objects can't be bounded by more than one non-builtin trait.
-pub trait ReadSeek: Read + Seek + Send {}
-
-impl<T: Read + Seek + Send> ReadSeek for T {}
-
-/// Data stored in the compiler cache.
-pub struct CacheRead {
- zip: ZipArchive<Box<dyn ReadSeek>>,
-}
-
-/// Represents a failure to decompress stored object data.
-#[derive(Debug)]
-pub struct DecompressionFailure;
-
-impl std::fmt::Display for DecompressionFailure {
- fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
- write!(f, "failed to decompress content")
- }
-}
-
-impl std::error::Error for DecompressionFailure {}
-
-impl CacheRead {
- /// Create a cache entry from `reader`.
- pub fn from<R>(reader: R) -> Result<CacheRead>
- where
- R: ReadSeek + 'static,
- {
- let z = ZipArchive::new(Box::new(reader) as Box<dyn ReadSeek>)
- .context("Failed to parse cache entry")?;
- Ok(CacheRead { zip: z })
- }
-
- /// Get an object from this cache entry at `name` and write it to `to`.
- /// If the file has stored permissions, return them.
- pub fn get_object<T>(&mut self, name: &str, to: &mut T) -> Result<Option<u32>>
- where
- T: Write,
- {
- let file = self.zip.by_name(name).or(Err(DecompressionFailure))?;
- if file.compression() != CompressionMethod::Stored {
- bail!(DecompressionFailure);
- }
- let mode = file.unix_mode();
- zstd::stream::copy_decode(file, to).or(Err(DecompressionFailure))?;
- Ok(mode)
- }
-
- /// Get the stdout from this cache entry, if it exists.
- pub fn get_stdout(&mut self) -> Vec<u8> {
- self.get_bytes("stdout")
- }
-
- /// Get the stderr from this cache entry, if it exists.
- pub fn get_stderr(&mut self) -> Vec<u8> {
- self.get_bytes("stderr")
- }
-
- fn get_bytes(&mut self, name: &str) -> Vec<u8> {
- let mut bytes = Vec::new();
- drop(self.get_object(name, &mut bytes));
- bytes
- }
-
- pub async fn extract_objects<T>(
- mut self,
- objects: T,
- pool: &tokio::runtime::Handle,
- ) -> Result<()>
- where
- T: IntoIterator<Item = FileObjectSource> + Send + Sync + 'static,
- {
- pool.spawn_blocking(move || {
- for FileObjectSource {
- key,
- path,
- optional,
- } in objects
- {
- let dir = match path.parent() {
- Some(d) => d,
- None => bail!("Output file without a parent directory!"),
- };
- // Write the cache entry to a tempfile and then atomically
- // move it to its final location so that other rustc invocations
- // happening in parallel don't see a partially-written file.
- let mut tmp = NamedTempFile::new_in(dir)?;
- match (self.get_object(&key, &mut tmp), optional) {
- (Ok(mode), _) => {
- tmp.persist(&path)?;
- if let Some(mode) = mode {
- set_file_mode(&path, mode)?;
- }
- }
- (Err(e), false) => return Err(e),
- // skip if no object found and it's optional
- (Err(_), true) => continue,
- }
- }
- Ok(())
- })
- .await?
- }
+#[cfg(feature = "azure")]
+fn get_azure_storage(config: &config::AzureCacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!(
+ "Init azure cache with container {}, key_prefix {}",
+ config.container, config.key_prefix
+ );
+ let storage = AzureBlobCache::build(
+ &config.connection_string,
+ &config.container,
+ &config.key_prefix,
+ )
+ .map_err(|err| anyhow!("create azure cache failed: {err:?}"))?;
+ Ok(Arc::new(storage))
}
-/// Data to be stored in the compiler cache.
-pub struct CacheWrite {
- zip: ZipWriter<io::Cursor<Vec<u8>>>,
+#[cfg(feature = "gcs")]
+fn get_gcs_storage(config: &config::GCSCacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!(
+ "Init gcs cache with bucket {}, key_prefix {}",
+ config.bucket, config.key_prefix
+ );
+
+ let storage = GCSCache::build(
+ &config.bucket,
+ &config.key_prefix,
+ config.cred_path.as_deref(),
+ config.service_account.as_deref(),
+ config.rw_mode.into(),
+ config.credential_url.as_deref(),
+ )
+ .map_err(|err| anyhow!("create gcs cache failed: {err:?}"))?;
+
+ Ok(Arc::new(storage))
}
-impl CacheWrite {
- /// Create a new, empty cache entry.
- pub fn new() -> CacheWrite {
- CacheWrite {
- zip: ZipWriter::new(io::Cursor::new(vec![])),
- }
- }
-
- /// Create a new cache entry populated with the contents of `objects`.
- pub async fn from_objects<T>(objects: T, pool: &tokio::runtime::Handle) -> Result<CacheWrite>
- where
- T: IntoIterator<Item = FileObjectSource> + Send + Sync + 'static,
- {
- pool.spawn_blocking(move || {
- let mut entry = CacheWrite::new();
- for FileObjectSource {
- key,
- path,
- optional,
- } in objects
- {
- let f = fs::File::open(&path)
- .with_context(|| format!("failed to open file `{:?}`", path));
- match (f, optional) {
- (Ok(mut f), _) => {
- let mode = get_file_mode(&f)?;
- entry.put_object(&key, &mut f, mode).with_context(|| {
- format!("failed to put object `{:?}` in cache entry", path)
- })?;
- }
- (Err(e), false) => return Err(e),
- (Err(_), true) => continue,
- }
- }
- Ok(entry)
- })
- .await?
- }
-
- /// Add an object containing the contents of `from` to this cache entry at `name`.
- /// If `mode` is `Some`, store the file entry with that mode.
- pub fn put_object<T>(&mut self, name: &str, from: &mut T, mode: Option<u32>) -> Result<()>
- where
- T: Read,
- {
- // We're going to declare the compression method as "stored",
- // but we're actually going to store zstd-compressed blobs.
- let opts = FileOptions::default().compression_method(CompressionMethod::Stored);
- let opts = if let Some(mode) = mode {
- opts.unix_permissions(mode)
- } else {
- opts
- };
- self.zip
- .start_file(name, opts)
- .context("Failed to start cache entry object")?;
-
- let compression_level = std::env::var("SCCACHE_CACHE_ZSTD_LEVEL")
- .ok()
- .and_then(|value| value.parse::<i32>().ok())
- .unwrap_or(3);
- zstd::stream::copy_encode(from, &mut self.zip, compression_level)?;
- Ok(())
- }
-
- pub fn put_stdout(&mut self, bytes: &[u8]) -> Result<()> {
- self.put_bytes("stdout", bytes)
- }
-
- pub fn put_stderr(&mut self, bytes: &[u8]) -> Result<()> {
- self.put_bytes("stderr", bytes)
- }
-
- fn put_bytes(&mut self, name: &str, bytes: &[u8]) -> Result<()> {
- if !bytes.is_empty() {
- let mut cursor = Cursor::new(bytes);
- return self.put_object(name, &mut cursor, None);
- }
- Ok(())
- }
-
- /// Finish writing data to the cache entry writer, and return the data.
- pub fn finish(self) -> Result<Vec<u8>> {
- let CacheWrite { mut zip } = self;
- let cur = zip.finish().context("Failed to finish cache entry zip")?;
- Ok(cur.into_inner())
- }
-}
+#[cfg(feature = "gha")]
+fn get_gha_storage(config: &config::GHACacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!("Init gha cache with version {}", config.version);
-impl Default for CacheWrite {
- fn default() -> Self {
- Self::new()
- }
+ let storage = GHACache::build(&config.version)
+ .map_err(|err| anyhow!("create gha cache failed: {err:?}"))?;
+ Ok(Arc::new(storage))
}
-/// An interface to cache storage.
-#[async_trait]
-pub trait Storage: Send + Sync {
- /// Get a cache entry by `key`.
- ///
- /// If an error occurs, this method should return a `Cache::Error`.
- /// If nothing fails but the entry is not found in the cache,
- /// it should return a `Cache::Miss`.
- /// If the entry is successfully found in the cache, it should
- /// return a `Cache::Hit`.
- async fn get(&self, key: &str) -> Result<Cache>;
-
- /// Put `entry` in the cache under `key`.
- ///
- /// Returns a `Future` that will provide the result or error when the put is
- /// finished.
- async fn put(&self, key: &str, entry: CacheWrite) -> Result<Duration>;
-
- /// Check the cache capability.
- ///
- /// - `Ok(CacheMode::ReadOnly)` means cache can only be used to `get`
- /// cache.
- /// - `Ok(CacheMode::ReadWrite)` means cache can do both `get` and `put`.
- /// - `Err(err)` means cache is not setup correctly or not match with
- /// users input (for example, user try to use `ReadWrite` but cache
- /// is `ReadOnly`).
- ///
- /// We will provide a default implementation which returns
- /// `Ok(CacheMode::ReadWrite)` for service that doesn't
- /// support check yet.
- async fn check(&self) -> Result<CacheMode> {
- Ok(CacheMode::ReadWrite)
- }
-
- /// Get the storage location.
- fn location(&self) -> String;
-
- /// Get the current storage usage, if applicable.
- async fn current_size(&self) -> Result<Option<u64>>;
-
- /// Get the maximum storage size, if applicable.
- async fn max_size(&self) -> Result<Option<u64>>;
-
- /// Return the config for preprocessor cache mode if applicable
- fn preprocessor_cache_mode_config(&self) -> PreprocessorCacheModeConfig {
- // Enable by default, only in local mode
- PreprocessorCacheModeConfig::default()
- }
- /// Return the preprocessor cache entry for a given preprocessor key,
- /// if it exists.
- /// Only applicable when using preprocessor cache mode.
- async fn get_preprocessor_cache_entry(
- &self,
- _key: &str,
- ) -> Result<Option<Box<dyn ReadSeek>>> {
- Ok(None)
- }
- /// Insert a preprocessor cache entry at the given preprocessor key,
- /// overwriting the entry if it exists.
- /// Only applicable when using preprocessor cache mode.
- async fn put_preprocessor_cache_entry(
- &self,
- _key: &str,
- _preprocessor_cache_entry: PreprocessorCacheEntry,
- ) -> Result<()> {
- Ok(())
- }
+#[cfg(feature = "memcached")]
+fn get_memcached_storage(config: &config::MemcachedCacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!("Init memcached cache with url {}", config.url);
+ let storage = MemcachedCache::build(
+ &config.url,
+ config.username.as_deref(),
+ config.password.as_deref(),
+ &config.key_prefix,
+ config.expiration,
+ )
+ .map_err(|err| anyhow!("create memcached cache failed: {err:?}"))?;
+ Ok(Arc::new(storage))
}
-/// Configuration switches for preprocessor cache mode.
-#[derive(Debug, Copy, Clone, PartialEq, Eq, Serialize, Deserialize)]
-#[serde(deny_unknown_fields)]
-#[serde(default)]
-pub struct PreprocessorCacheModeConfig {
- /// Whether to use preprocessor cache mode entirely
- pub use_preprocessor_cache_mode: bool,
- /// If false (default), only compare header files by hashing their contents.
- /// If true, will use size + ctime + mtime to check whether a file has changed.
- /// See other flags below for more control over this behavior.
- pub file_stat_matches: bool,
- /// If true (default), uses the ctime (file status change on UNIX,
- /// creation time on Windows) to check that a file has/hasn't changed.
- /// Can be useful to disable when backdating modification times
- /// in a controlled manner.
- pub use_ctime_for_stat: bool,
- /// If true, ignore `__DATE__`, `__TIME__` and `__TIMESTAMP__` being present
- /// in the source code. Will speed up preprocessor cache mode,
- /// but can result in false positives.
- pub ignore_time_macros: bool,
- /// If true, preprocessor cache mode will not cache system headers, only
- /// add them to the hash.
- pub skip_system_headers: bool,
- /// If true (default), will add the current working directory in the hash to
- /// distinguish two compilations from different directories.
- pub hash_working_directory: bool,
+#[cfg(feature = "redis")]
+fn get_redis_storage(config: &config::RedisCacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!("Init redis cache with endpoint {:?}", config.endpoint);
+ let storage = RedisCache::build_single(
+ config
+ .endpoint
+ .as_ref()
+ .ok_or_else(|| anyhow!("redis endpoint is required"))?,
+ config.username.as_deref(),
+ config.password.as_deref(),
+ config.db,
+ &config.key_prefix,
+ config.ttl,
+ )
+ .map_err(|err| anyhow!("create redis cache failed: {err:?}"))?;
+ Ok(Arc::new(storage))
}
-impl Default for PreprocessorCacheModeConfig {
- fn default() -> Self {
- Self {
- use_preprocessor_cache_mode: false,
- file_stat_matches: false,
- use_ctime_for_stat: true,
- ignore_time_macros: false,
- skip_system_headers: false,
- hash_working_directory: true,
- }
- }
+#[cfg(feature = "s3")]
+fn get_s3_storage(config: &config::S3CacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!(
+ "Init s3 cache with bucket {}, endpoint {:?}",
+ config.bucket, config.endpoint
+ );
+ let storage_builder = S3Cache::new(
+ config.bucket.clone(),
+ config.key_prefix.clone(),
+ config.no_credentials,
+ );
+ let storage = storage_builder
+ .with_region(config.region.clone())
+ .with_endpoint(config.endpoint.clone())
+ .with_use_ssl(config.use_ssl)
+ .with_server_side_encryption(config.server_side_encryption)
+ .with_enable_virtual_host_style(config.enable_virtual_host_style)
+ .build()
+ .map_err(|err| anyhow!("create s3 cache failed: {err:?}"))?;
+
+ Ok(Arc::new(storage))
}
-impl PreprocessorCacheModeConfig {
- /// Return a default [`Self`], but with the cache active.
- pub fn activated() -> Self {
- Self {
- use_preprocessor_cache_mode: true,
- ..Default::default()
- }
- }
+#[cfg(feature = "webdav")]
+fn get_webdav_storage(config: &config::WebdavCacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!("Init webdav cache with endpoint {}", config.endpoint);
+ let storage = WebdavCache::build(
+ &config.endpoint,
+ &config.key_prefix,
+ config.username.as_deref(),
+ config.password.as_deref(),
+ config.token.as_deref(),
+ )
+ .map_err(|err| anyhow!("create webdav cache failed: {err:?}"))?;
+ Ok(Arc::new(storage))
}
-/// Implement storage for operator.
-#[cfg(any(
- feature = "azure",
- feature = "gcs",
- feature = "gha",
- feature = "memcached",
- feature = "redis",
- feature = "s3",
- feature = "webdav",
-))]
-#[async_trait]
-impl Storage for opendal::Operator {
- async fn get(&self, key: &str) -> Result {
- match self.read(&normalize_key(key)).await {
- Ok(res) => {
- let hit = CacheRead::from(io::Cursor::new(res.to_bytes()))?;
- Ok(Cache::Hit(hit))
- }
- Err(e) if e.kind() == opendal::ErrorKind::NotFound => Ok(Cache::Miss),
- Err(e) => {
- warn!("Got unexpected error: {:?}", e);
- Ok(Cache::Miss)
- }
- }
- }
-
- async fn put(&self, key: &str, entry: CacheWrite) -> Result {
- let start = std::time::Instant::now();
-
- self.write(&normalize_key(key), entry.finish()?).await?;
-
- Ok(start.elapsed())
- }
-
- async fn check(&self) -> Result {
- use opendal::ErrorKind;
-
- let path = ".sccache_check";
-
- // Read is required, return error directly if we can't read .
- match self.read(path).await {
- Ok(_) => (),
- // Read not exist file with not found is ok.
- Err(err) if err.kind() == ErrorKind::NotFound => (),
- // Tricky Part.
- //
- // We tolerate rate limited here to make sccache keep running.
- // For the worse case, we will miss all the cache.
- //
- // In some super rare cases, user could configure storage in wrong
- // and hitting other services rate limit. There are few things we
- // can do, so we will print our the error here to make users know
- // about it.
- Err(err) if err.kind() == ErrorKind::RateLimited => {
- eprintln!("cache storage read check: {err:?}, but we decide to keep running");
- }
- Err(err) => bail!("cache storage failed to read: {:?}", err),
- }
-
- let can_write = match self.write(path, "Hello, World!").await {
- Ok(_) => true,
- Err(err) if err.kind() == ErrorKind::AlreadyExists => true,
- // Tolerate all other write errors because we can do read at least.
- Err(err) => {
- eprintln!("storage write check failed: {err:?}");
- false
- }
- };
-
- let mode = if can_write {
- CacheMode::ReadWrite
- } else {
- CacheMode::ReadOnly
- };
-
- debug!("storage check result: {mode:?}");
-
- Ok(mode)
- }
-
- fn location(&self) -> String {
- let meta = self.info();
- format!(
- "{}, name: {}, prefix: {}",
- meta.scheme(),
- meta.name(),
- meta.root()
- )
- }
-
- async fn current_size(&self) -> Result> {
- Ok(None)
- }
-
- async fn max_size(&self) -> Result > {
- Ok(None)
- }
+#[cfg(feature = "oss")]
+fn get_oss_storage(config: &config::OSSCacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!(
+ "Init oss cache with bucket {}, endpoint {:?}",
+ config.bucket, config.endpoint
+ );
+ let storage = OSSCache::build(
+ &config.bucket,
+ &config.key_prefix,
+ config.endpoint.as_deref(),
+ config.no_credentials,
+ )
+ .map_err(|err| anyhow!("create oss cache failed: {err:?}"))?;
+ Ok(Arc::new(storage))
}
-/// Normalize key `abcdef` into `a/b/c/abcdef`
-pub(in crate::cache) fn normalize_key(key: &str) -> String {
- format!("{}/{}/{}/{}", &key[0..1], &key[1..2], &key[2..3], &key)
+#[cfg(feature = "cos")]
+fn get_cos_storage(config: &config::COSCacheConfig) -> Result<Arc<dyn Storage>> {
+ debug!(
+ "Init cos cache with bucket {}, endpoint {:?}",
+ config.bucket, config.endpoint
+ );
+ let storage = COSCache::build(
+ &config.bucket,
+ &config.key_prefix,
+ config.endpoint.as_deref(),
+ )
+ .map_err(|err| anyhow!("create cos cache failed: {err:?}"))?;
+ Ok(Arc::new(storage))
}
-/// Get a suitable `Storage` implementation from configuration.
-#[allow(clippy::cognitive_complexity)] // TODO simplify!
-pub fn storage_from_config(
- config: &Config,
+fn get_disk_storage(
+ config: &DiskCacheConfig,
pool: &tokio::runtime::Handle,
) -> Result<Arc<dyn Storage>> {
+ let (dir, size) = (&config.dir, config.size);
+ let rw_mode = config.rw_mode.into();
+ debug!("Init disk cache with dir {:?}, size {}", dir, size);
+ Ok(Arc::new(DiskCache::new(dir, size, pool, rw_mode)))
+}
+
+/// Get a suitable cache `Storage` implementation from configuration.
+fn get_storage(config: &Config, pool: &tokio::runtime::Handle) -> Result<Arc<dyn Storage>> {
if let Some(cache_type) = &config.cache {
match cache_type {
#[cfg(feature = "azure")]
- CacheType::Azure(config::AzureCacheConfig {
- connection_string,
- container,
- key_prefix,
- }) => {
- debug!("Init azure cache with container {container}, key_prefix {key_prefix}");
- let storage = AzureBlobCache::build(connection_string, container, key_prefix)
- .map_err(|err| anyhow!("create azure cache failed: {err:?}"))?;
- return Ok(Arc::new(storage));
- }
+ CacheType::Azure(azure_config) => return get_azure_storage(azure_config),
#[cfg(feature = "gcs")]
- CacheType::GCS(config::GCSCacheConfig {
- bucket,
- key_prefix,
- cred_path,
- rw_mode,
- service_account,
- credential_url,
- }) => {
- debug!("Init gcs cache with bucket {bucket}, key_prefix {key_prefix}");
-
- let storage = GCSCache::build(
- bucket,
- key_prefix,
- cred_path.as_deref(),
- service_account.as_deref(),
- (*rw_mode).into(),
- credential_url.as_deref(),
- )
- .map_err(|err| anyhow!("create gcs cache failed: {err:?}"))?;
-
- return Ok(Arc::new(storage));
- }
+ CacheType::GCS(gcs_config) => return get_gcs_storage(gcs_config),
#[cfg(feature = "gha")]
- CacheType::GHA(config::GHACacheConfig { version, .. }) => {
- debug!("Init gha cache with version {version}");
-
- let storage = GHACache::build(version)
- .map_err(|err| anyhow!("create gha cache failed: {err:?}"))?;
- return Ok(Arc::new(storage));
- }
+ CacheType::GHA(gha_config) => return get_gha_storage(gha_config),
#[cfg(feature = "memcached")]
- CacheType::Memcached(config::MemcachedCacheConfig {
- url,
- username,
- password,
- expiration,
- key_prefix,
- }) => {
- debug!("Init memcached cache with url {url}");
-
- let storage = MemcachedCache::build(
- url,
- username.as_deref(),
- password.as_deref(),
- key_prefix,
- *expiration,
- )
- .map_err(|err| anyhow!("create memcached cache failed: {err:?}"))?;
- return Ok(Arc::new(storage));
+ CacheType::Memcached(memcached_config) => {
+ return get_memcached_storage(memcached_config);
}
#[cfg(feature = "redis")]
- CacheType::Redis(config::RedisCacheConfig {
- endpoint,
- cluster_endpoints,
- username,
- password,
- db,
- url,
- ttl,
- key_prefix,
- }) => {
- let storage = match (endpoint, cluster_endpoints, url) {
- (Some(url), None, None) => {
- debug!("Init redis single-node cache with url {url}");
- RedisCache::build_single(
- url,
- username.as_deref(),
- password.as_deref(),
- *db,
- key_prefix,
- *ttl,
- )
- }
- (None, Some(urls), None) => {
- debug!("Init redis cluster cache with urls {urls}");
- RedisCache::build_cluster(
- urls,
- username.as_deref(),
- password.as_deref(),
- *db,
- key_prefix,
- *ttl,
- )
- }
- (None, None, Some(url)) => {
- warn!("Init redis single-node cache from deprecated API with url {url}");
- if username.is_some() || password.is_some() || *db != crate::config::DEFAULT_REDIS_DB {
- bail!("`username`, `password` and `db` has no effect when `url` is set. Please use `endpoint` or `cluster_endpoints` for new API accessing");
- }
-
- RedisCache::build_from_url(url, key_prefix, *ttl)
- }
- _ => bail!("Only one of `endpoint`, `cluster_endpoints`, `url` must be set"),
- }
- .map_err(|err| anyhow!("create redis cache failed: {err:?}"))?;
- return Ok(Arc::new(storage));
- }
+ CacheType::Redis(redis_config) => return get_redis_storage(redis_config),
#[cfg(feature = "s3")]
- CacheType::S3(c) => {
- debug!(
- "Init s3 cache with bucket {}, endpoint {:?}",
- c.bucket, c.endpoint
- );
- let storage_builder =
- S3Cache::new(c.bucket.clone(), c.key_prefix.clone(), c.no_credentials);
- let storage = storage_builder
- .with_region(c.region.clone())
- .with_endpoint(c.endpoint.clone())
- .with_use_ssl(c.use_ssl)
- .with_server_side_encryption(c.server_side_encryption)
- .with_enable_virtual_host_style(c.enable_virtual_host_style)
- .build()
- .map_err(|err| anyhow!("create s3 cache failed: {err:?}"))?;
-
- return Ok(Arc::new(storage));
- }
+ CacheType::S3(s3_config) => return get_s3_storage(s3_config),
#[cfg(feature = "webdav")]
- CacheType::Webdav(c) => {
- debug!("Init webdav cache with endpoint {}", c.endpoint);
-
- let storage = WebdavCache::build(
- &c.endpoint,
- &c.key_prefix,
- c.username.as_deref(),
- c.password.as_deref(),
- c.token.as_deref(),
- )
- .map_err(|err| anyhow!("create webdav cache failed: {err:?}"))?;
-
- return Ok(Arc::new(storage));
- }
+ CacheType::Webdav(webdav_config) => return get_webdav_storage(webdav_config),
#[cfg(feature = "oss")]
- CacheType::OSS(c) => {
- debug!(
- "Init oss cache with bucket {}, endpoint {:?}",
- c.bucket, c.endpoint
- );
-
- let storage = OSSCache::build(
- &c.bucket,
- &c.key_prefix,
- c.endpoint.as_deref(),
- c.no_credentials,
- )
- .map_err(|err| anyhow!("create oss cache failed: {err:?}"))?;
-
- return Ok(Arc::new(storage));
- }
+ CacheType::OSS(oss_config) => return get_oss_storage(oss_config),
#[cfg(feature = "cos")]
- CacheType::COS(c) => {
- debug!(
- "Init cos cache with bucket {}, endpoint {:?}",
- c.bucket, c.endpoint
- );
-
- let storage = COSCache::build(&c.bucket, &c.key_prefix, c.endpoint.as_deref())
- .map_err(|err| anyhow!("create cos cache failed: {err:?}"))?;
-
- return Ok(Arc::new(storage));
- }
+ CacheType::COS(c) => return get_cos_storage(c),
#[allow(unreachable_patterns)]
// if we build only with `cargo build --no-default-features`
// we only want to use sccache with a local cache (no remote storage)
@@ -747,31 +224,32 @@ pub fn storage_from_config(
}
}
- let (dir, size) = (&config.fallback_cache.dir, config.fallback_cache.size);
- let preprocessor_cache_mode_config = config.fallback_cache.preprocessor_cache_mode;
- let rw_mode = config.fallback_cache.rw_mode.into();
- debug!("Init disk cache with dir {:?}, size {}", dir, size);
- Ok(Arc::new(DiskCache::new(
- dir,
- size,
- pool,
- preprocessor_cache_mode_config,
- rw_mode,
- )))
+ get_disk_storage(&config.fallback_cache, pool)
+}
+
+/// Get preprocessor cache storage from configuration.
+fn get_preprocessor_cache_storage(config: &Config) -> Result<Arc<dyn PreprocessorCacheStorage>> {
+ Ok(Arc::new(PreprocessorCache::new(&config.preprocessor_cache)))
+}
+
+/// Get both general cache storage and preprocessor cache storage from configuration.
+pub fn get_storage_from_config(
+ config: &Config,
+ pool: &tokio::runtime::Handle,
+) -> Result<(Arc<dyn Storage>, Arc<dyn PreprocessorCacheStorage>)> {
+ Ok((
+ get_storage(config, pool)?,
+ get_preprocessor_cache_storage(config)?,
+ ))
}
#[cfg(test)]
mod test {
use super::*;
+ use crate::cache::CacheWrite;
+ use crate::compiler::PreprocessorCacheEntry;
use crate::config::CacheModeConfig;
-
- #[test]
- fn test_normalize_key() {
- assert_eq!(
- normalize_key("0123456789abcdef0123456789abcdef"),
- "0/1/2/0123456789abcdef0123456789abcdef"
- );
- }
+ use fs_err as fs;
#[test]
fn test_read_write_mode_local() {
@@ -801,11 +279,12 @@ mod test {
config.fallback_cache.rw_mode = CacheModeConfig::ReadWrite;
{
- let cache = storage_from_config(&config, runtime.handle()).unwrap();
+ let (cache, preprocessor_cache) =
+ get_storage_from_config(&config, runtime.handle()).unwrap();
runtime.block_on(async move {
cache.put("test1", CacheWrite::default()).await.unwrap();
- cache
+ preprocessor_cache
.put_preprocessor_cache_entry("test1", PreprocessorCacheEntry::default())
.await
.unwrap();
@@ -814,9 +293,11 @@ mod test {
// Test Read-only
config.fallback_cache.rw_mode = CacheModeConfig::ReadOnly;
+ config.preprocessor_cache.rw_mode = CacheModeConfig::ReadOnly;
{
- let cache = storage_from_config(&config, runtime.handle()).unwrap();
+ let (cache, preprocessor_cache) =
+ get_storage_from_config(&config, runtime.handle()).unwrap();
runtime.block_on(async move {
assert_eq!(
@@ -828,7 +309,7 @@ mod test {
"Cannot write to a read-only cache"
);
assert_eq!(
- cache
+ preprocessor_cache
.put_preprocessor_cache_entry("test1", PreprocessorCacheEntry::default())
.await
.unwrap_err()
diff --git a/src/cache/cache_io.rs b/src/cache/cache_io.rs
new file mode 100644
index 000000000..c4c2fa134
--- /dev/null
+++ b/src/cache/cache_io.rs
@@ -0,0 +1,270 @@
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+use super::utils::{get_file_mode, set_file_mode};
+use crate::errors::*;
+use fs_err as fs;
+use std::fmt;
+use std::io::{Cursor, Read, Seek, Write};
+use std::path::PathBuf;
+use tempfile::NamedTempFile;
+use zip::write::FileOptions;
+use zip::{CompressionMethod, ZipArchive, ZipWriter};
+
+/// Cache object sourced by a file.
+#[derive(Clone)]
+pub struct FileObjectSource {
+ /// Identifier for this object. Should be unique within a compilation unit.
+ /// Note that a compilation unit is a single source file in C/C++ and a crate in Rust.
+ pub key: String,
+ /// Absolute path to the file.
+ pub path: PathBuf,
+ /// Whether the file is optional: a missing optional object is skipped instead of failing the operation.
+ pub optional: bool,
+}
+
+/// Result of a cache lookup.
+pub enum Cache {
+ /// Result was found in cache.
+ Hit(CacheRead),
+ /// Result was not found in cache.
+ Miss,
+ /// Do not cache the results of the compilation.
+ None,
+ /// Cache entry should be ignored, force compilation.
+ Recache,
+}
+
+impl fmt::Debug for Cache {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match *self {
+ Cache::Hit(_) => write!(f, "Cache::Hit(...)"),
+ Cache::Miss => write!(f, "Cache::Miss"),
+ Cache::None => write!(f, "Cache::None"),
+ Cache::Recache => write!(f, "Cache::Recache"),
+ }
+ }
+}
+
+/// CacheMode is used to represent which mode we are using.
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
+pub enum CacheMode {
+ /// Only read cache from storage.
+ ReadOnly,
+ /// Full support of cache storage: read and write.
+ ReadWrite,
+}
+
+/// Trait objects can't be bounded by more than one non-builtin trait.
+pub trait ReadSeek: Read + Seek + Send {}
+
+impl<T: Read + Seek + Send> ReadSeek for T {}
+
+/// Data stored in the compiler cache.
+pub struct CacheRead {
+ zip: ZipArchive<Box<dyn ReadSeek>>,
+}
+
+/// Represents a failure to decompress stored object data.
+#[derive(Debug)]
+pub struct DecompressionFailure;
+
+impl std::fmt::Display for DecompressionFailure {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ write!(f, "failed to decompress content")
+ }
+}
+
+impl std::error::Error for DecompressionFailure {}
+
+impl CacheRead {
+ /// Create a cache entry from `reader`.
+ pub fn from(reader: R) -> Result<CacheRead>
+ where
+ R: ReadSeek + 'static,
+ {
+ let z = ZipArchive::new(Box::new(reader) as Box<dyn ReadSeek>)
+ .context("Failed to parse cache entry")?;
+ Ok(CacheRead { zip: z })
+ }
+
+ /// Get an object from this cache entry at `name` and write it to `to`.
+ /// If the file has stored permissions, return them.
+ pub fn get_object<T>(&mut self, name: &str, to: &mut T) -> Result<Option<u32>>
+ where
+ T: Write,
+ {
+ let file = self.zip.by_name(name).or(Err(DecompressionFailure))?;
+ if file.compression() != CompressionMethod::Stored {
+ bail!(DecompressionFailure);
+ }
+ let mode = file.unix_mode();
+ zstd::stream::copy_decode(file, to).or(Err(DecompressionFailure))?;
+ Ok(mode)
+ }
+
+ /// Get the stdout from this cache entry, if it exists.
+ pub fn get_stdout(&mut self) -> Vec<u8> {
+ self.get_bytes("stdout")
+ }
+
+ /// Get the stderr from this cache entry, if it exists.
+ pub fn get_stderr(&mut self) -> Vec<u8> {
+ self.get_bytes("stderr")
+ }
+
+ fn get_bytes(&mut self, name: &str) -> Vec<u8> {
+ let mut bytes = Vec::new();
+ drop(self.get_object(name, &mut bytes));
+ bytes
+ }
+
+ pub async fn extract_objects(
+ mut self,
+ objects: T,
+ pool: &tokio::runtime::Handle,
+ ) -> Result<()>
+ where
+ T: IntoIterator<Item = FileObjectSource> + Send + Sync + 'static,
+ {
+ pool.spawn_blocking(move || {
+ for FileObjectSource {
+ key,
+ path,
+ optional,
+ } in objects
+ {
+ let dir = match path.parent() {
+ Some(d) => d,
+ None => bail!("Output file without a parent directory!"),
+ };
+ // Write the cache entry to a tempfile and then atomically
+ // move it to its final location so that other rustc invocations
+ // happening in parallel don't see a partially-written file.
+ let mut tmp = NamedTempFile::new_in(dir)?;
+ match (self.get_object(&key, &mut tmp), optional) {
+ (Ok(mode), _) => {
+ tmp.persist(&path)?;
+ if let Some(mode) = mode {
+ set_file_mode(&path, mode)?;
+ }
+ }
+ (Err(e), false) => return Err(e),
+ // skip if no object found and it's optional
+ (Err(_), true) => continue,
+ }
+ }
+ Ok(())
+ })
+ .await?
+ }
+}
+
+/// Data to be stored in the compiler cache.
+pub struct CacheWrite {
+ zip: ZipWriter<Cursor<Vec<u8>>>,
+}
+
+impl CacheWrite {
+ /// Create a new, empty cache entry.
+ pub fn new() -> CacheWrite {
+ CacheWrite {
+ zip: ZipWriter::new(Cursor::new(vec![])),
+ }
+ }
+
+ /// Create a new cache entry populated with the contents of `objects`.
+ pub async fn from_objects<T>(objects: T, pool: &tokio::runtime::Handle) -> Result<CacheWrite>
+ where
+ T: IntoIterator<Item = FileObjectSource> + Send + Sync + 'static,
+ {
+ pool.spawn_blocking(move || {
+ let mut entry = CacheWrite::new();
+ for FileObjectSource {
+ key,
+ path,
+ optional,
+ } in objects
+ {
+ let f = fs::File::open(&path)
+ .with_context(|| format!("failed to open file `{:?}`", path));
+ match (f, optional) {
+ (Ok(mut f), _) => {
+ let mode = get_file_mode(&f)?;
+ entry.put_object(&key, &mut f, mode).with_context(|| {
+ format!("failed to put object `{:?}` in cache entry", path)
+ })?;
+ }
+ (Err(e), false) => return Err(e),
+ (Err(_), true) => continue,
+ }
+ }
+ Ok(entry)
+ })
+ .await?
+ }
+
+ /// Add an object containing the contents of `from` to this cache entry at `name`.
+ /// If `mode` is `Some`, store the file entry with that mode.
+ pub fn put_object<T>(&mut self, name: &str, from: &mut T, mode: Option<u32>) -> Result<()>
+ where
+ T: Read,
+ {
+ // We're going to declare the compression method as "stored",
+ // but we're actually going to store zstd-compressed blobs.
+ let opts = FileOptions::default().compression_method(CompressionMethod::Stored);
+ let opts = if let Some(mode) = mode {
+ opts.unix_permissions(mode)
+ } else {
+ opts
+ };
+ self.zip
+ .start_file(name, opts)
+ .context("Failed to start cache entry object")?;
+
+ let compression_level = std::env::var("SCCACHE_CACHE_ZSTD_LEVEL")
+ .ok()
+ .and_then(|value| value.parse::<i32>().ok())
+ .unwrap_or(3);
+ zstd::stream::copy_encode(from, &mut self.zip, compression_level)?;
+ Ok(())
+ }
+
+ pub fn put_stdout(&mut self, bytes: &[u8]) -> Result<()> {
+ self.put_bytes("stdout", bytes)
+ }
+
+ pub fn put_stderr(&mut self, bytes: &[u8]) -> Result<()> {
+ self.put_bytes("stderr", bytes)
+ }
+
+ fn put_bytes(&mut self, name: &str, bytes: &[u8]) -> Result<()> {
+ if !bytes.is_empty() {
+ let mut cursor = Cursor::new(bytes);
+ return self.put_object(name, &mut cursor, None);
+ }
+ Ok(())
+ }
+
+ /// Finish writing data to the cache entry writer, and return the data.
+ pub fn finish(self) -> Result<Vec<u8>> {
+ let CacheWrite { mut zip } = self;
+ let cur = zip.finish().context("Failed to finish cache entry zip")?;
+ Ok(cur.into_inner())
+ }
+}
+
+impl Default for CacheWrite {
+ fn default() -> Self {
+ Self::new()
+ }
+}
diff --git a/src/cache/disk.rs b/src/cache/disk.rs
index c4f3491e9..f8707f9c0 100644
--- a/src/cache/disk.rs
+++ b/src/cache/disk.rs
@@ -12,67 +12,23 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use super::lazy_disk_cache::LazyDiskCache;
use crate::cache::{Cache, CacheMode, CacheRead, CacheWrite, Storage};
-use crate::compiler::PreprocessorCacheEntry;
-use crate::lru_disk_cache::LruDiskCache;
-use crate::lru_disk_cache::{Error as LruError, ReadSeek};
+use crate::errors::*;
+use crate::lru_disk_cache::Error as LruError;
use async_trait::async_trait;
-use std::ffi::{OsStr, OsString};
-use std::io::{BufWriter, Write};
+use std::ffi::OsStr;
+use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
-use crate::errors::*;
-
-use super::{PreprocessorCacheModeConfig, normalize_key};
-
-enum LazyDiskCache {
- Uninit { root: OsString, max_size: u64 },
- Init(LruDiskCache),
-}
-
-impl LazyDiskCache {
- fn get_or_init(&mut self) -> Result<&mut LruDiskCache> {
- match self {
- LazyDiskCache::Uninit { root, max_size } => {
- *self = LazyDiskCache::Init(LruDiskCache::new(&root, *max_size)?);
- self.get_or_init()
- }
- LazyDiskCache::Init(d) => Ok(d),
- }
- }
-
- fn get(&mut self) -> Option<&mut LruDiskCache> {
- match self {
- LazyDiskCache::Uninit { .. } => None,
- LazyDiskCache::Init(d) => Some(d),
- }
- }
-
- fn capacity(&self) -> u64 {
- match self {
- LazyDiskCache::Uninit { max_size, .. } => *max_size,
- LazyDiskCache::Init(d) => d.capacity(),
- }
- }
-
- fn path(&self) -> &Path {
- match self {
- LazyDiskCache::Uninit { root, .. } => root.as_ref(),
- LazyDiskCache::Init(d) => d.path(),
- }
- }
-}
-
/// A cache that stores entries at local disk paths.
pub struct DiskCache {
/// `LruDiskCache` does all the real work here.
 lru: Arc<Mutex<LazyDiskCache>>,
/// Thread pool to execute disk I/O
pool: tokio::runtime::Handle,
- preprocessor_cache_mode_config: PreprocessorCacheModeConfig,
- preprocessor_cache: Arc<Mutex<LazyDiskCache>>,
rw_mode: CacheMode,
}
@@ -82,7 +38,6 @@ impl DiskCache {
root: T,
max_size: u64,
pool: &tokio::runtime::Handle,
- preprocessor_cache_mode_config: PreprocessorCacheModeConfig,
rw_mode: CacheMode,
) -> DiskCache {
DiskCache {
@@ -91,13 +46,6 @@ impl DiskCache {
max_size,
})),
pool: pool.clone(),
- preprocessor_cache_mode_config,
- preprocessor_cache: Arc::new(Mutex::new(LazyDiskCache::Uninit {
- root: Path::new(root.as_ref())
- .join("preprocessor")
- .into_os_string(),
- max_size,
- })),
rw_mode,
}
}
@@ -178,42 +126,4 @@ impl Storage for DiskCache {
 async fn max_size(&self) -> Result<Option<u64>> {
Ok(Some(self.lru.lock().unwrap().capacity()))
}
- fn preprocessor_cache_mode_config(&self) -> PreprocessorCacheModeConfig {
- self.preprocessor_cache_mode_config
- }
- async fn get_preprocessor_cache_entry(&self, key: &str) -> Result<Option<Box<dyn ReadSeek>>> {
- let key = normalize_key(key);
- Ok(self
- .preprocessor_cache
- .lock()
- .unwrap()
- .get_or_init()?
- .get(key)
- .ok())
- }
- async fn put_preprocessor_cache_entry(
- &self,
- key: &str,
- preprocessor_cache_entry: PreprocessorCacheEntry,
- ) -> Result<()> {
- if self.rw_mode == CacheMode::ReadOnly {
- return Err(anyhow!("Cannot write to a read-only cache"));
- }
-
- let key = normalize_key(key);
- let mut f = self
- .preprocessor_cache
- .lock()
- .unwrap()
- .get_or_init()?
- .prepare_add(key, 0)?;
- preprocessor_cache_entry.serialize_to(BufWriter::new(f.as_file_mut()))?;
- Ok(self
- .preprocessor_cache
- .lock()
- .unwrap()
- .get()
- .unwrap()
- .commit(f)?)
- }
}
diff --git a/src/cache/lazy_disk_cache.rs b/src/cache/lazy_disk_cache.rs
new file mode 100644
index 000000000..0963522f6
--- /dev/null
+++ b/src/cache/lazy_disk_cache.rs
@@ -0,0 +1,54 @@
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+use crate::errors::*;
+use crate::lru_disk_cache::LruDiskCache;
+use std::ffi::OsString;
+use std::path::Path;
+
+pub enum LazyDiskCache {
+ Uninit { root: OsString, max_size: u64 },
+ Init(LruDiskCache),
+}
+
+impl LazyDiskCache {
+ pub fn get_or_init(&mut self) -> Result<&mut LruDiskCache> {
+ match self {
+ LazyDiskCache::Uninit { root, max_size } => {
+ *self = LazyDiskCache::Init(LruDiskCache::new(&root, *max_size)?);
+ self.get_or_init()
+ }
+ LazyDiskCache::Init(d) => Ok(d),
+ }
+ }
+
+ pub fn get(&mut self) -> Option<&mut LruDiskCache> {
+ match self {
+ LazyDiskCache::Uninit { .. } => None,
+ LazyDiskCache::Init(d) => Some(d),
+ }
+ }
+
+ pub fn capacity(&self) -> u64 {
+ match self {
+ LazyDiskCache::Uninit { max_size, .. } => *max_size,
+ LazyDiskCache::Init(d) => d.capacity(),
+ }
+ }
+
+ pub fn path(&self) -> &Path {
+ match self {
+ LazyDiskCache::Uninit { root, .. } => root.as_ref(),
+ LazyDiskCache::Init(d) => d.path(),
+ }
+ }
+}
diff --git a/src/cache/mod.rs b/src/cache/mod.rs
index 744499414..f4e74c764 100644
--- a/src/cache/mod.rs
+++ b/src/cache/mod.rs
@@ -16,6 +16,7 @@
pub mod azure;
#[allow(clippy::module_inception)]
pub mod cache;
+pub mod cache_io;
#[cfg(feature = "cos")]
pub mod cos;
pub mod disk;
@@ -23,15 +24,20 @@ pub mod disk;
pub mod gcs;
#[cfg(feature = "gha")]
pub mod gha;
+pub mod lazy_disk_cache;
#[cfg(feature = "memcached")]
pub mod memcached;
#[cfg(feature = "oss")]
pub mod oss;
+pub mod preprocessor_cache;
pub mod readonly;
#[cfg(feature = "redis")]
pub mod redis;
#[cfg(feature = "s3")]
pub mod s3;
+pub mod storage;
+pub(crate) mod utils;
#[cfg(feature = "webdav")]
pub mod webdav;
@@ -47,3 +53,7 @@ pub mod webdav;
pub(crate) mod http_client;
pub use crate::cache::cache::*;
+pub use crate::cache::cache_io::*;
+pub use crate::cache::lazy_disk_cache::*;
+pub use crate::cache::preprocessor_cache::*;
+pub use crate::cache::storage::*;
diff --git a/src/cache/preprocessor_cache.rs b/src/cache/preprocessor_cache.rs
new file mode 100644
index 000000000..4c2150d04
--- /dev/null
+++ b/src/cache/preprocessor_cache.rs
@@ -0,0 +1,130 @@
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+use std::{
+ io::BufWriter,
+ sync::{Arc, Mutex},
+};
+
+use crate::{
+ cache::{LazyDiskCache, utils::normalize_key},
+ compiler::PreprocessorCacheEntry,
+ config::{CacheModeConfig, PreprocessorCacheModeConfig},
+ errors::*,
+};
+use async_trait::async_trait;
+
+#[async_trait]
+pub trait PreprocessorCacheStorage: Send + Sync {
+ /// Return the config for preprocessor cache mode if applicable
+ fn get_config(&self) -> &PreprocessorCacheModeConfig;
+
+ /// Return the preprocessor cache entry for a given preprocessor key,
+ /// if it exists.
+ /// Only applicable when using preprocessor cache mode.
+ async fn get_preprocessor_cache_entry(
+ &self,
+ _key: &str,
+ ) -> Result<Option<Box<dyn ReadSeek>>> {
+ Ok(None)
+ }
+
+ /// Insert a preprocessor cache entry at the given preprocessor key,
+ /// overwriting the entry if it exists.
+ /// Only applicable when using preprocessor cache mode.
+ async fn put_preprocessor_cache_entry(
+ &self,
+ _key: &str,
+ _preprocessor_cache_entry: PreprocessorCacheEntry,
+ ) -> Result<()> {
+ Ok(())
+ }
+}
+
+/// Stores hashed source files as preprocessor cache entries
+/// when preprocessor cache mode is enabled.
+pub(crate) struct PreprocessorCache {
+ cache: Option<Arc<Mutex<LazyDiskCache>>>,
+ config: PreprocessorCacheModeConfig,
+}
+
+impl PreprocessorCache {
+ pub fn new(config: &PreprocessorCacheModeConfig) -> PreprocessorCache {
+ debug!("Creating PreprocessorCache with config: {:?}", config);
+ PreprocessorCache {
+ cache: if config.use_preprocessor_cache_mode {
+ assert!(
+ config.dir.is_some(),
+ "Preprocessor cache dir must be set when using preprocessor cache mode"
+ );
+ let config_dir = config.dir.as_ref().unwrap().clone();
+ debug!("Using preprocessor cache dir: {:?}", config_dir);
+ Some(Arc::new(Mutex::new(LazyDiskCache::Uninit {
+ root: config_dir.into_os_string(),
+ max_size: config.max_size,
+ })))
+ } else {
+ None
+ },
+ config: config.clone(),
+ }
+ }
+}
+
+#[async_trait]
+impl PreprocessorCacheStorage for PreprocessorCache {
+ /// Return the config for preprocessor cache mode if applicable
+ fn get_config(&self) -> &PreprocessorCacheModeConfig {
+ &self.config
+ }
+
+ /// Return the preprocessor cache entry for a given preprocessor key,
+ /// if it exists.
+ /// Only applicable when using preprocessor cache mode.
+ async fn get_preprocessor_cache_entry(
+ &self,
+ key: &str,
+ ) -> Result<Option<Box<dyn ReadSeek>>> {
+ match self.cache {
+ None => Ok(None),
+ Some(ref cache) => {
+ assert!(self.config.use_preprocessor_cache_mode);
+ let key = normalize_key(key);
+ Ok(cache.lock().unwrap().get_or_init()?.get(key).ok())
+ }
+ }
+ }
+
+ /// Insert a preprocessor cache entry at the given preprocessor key,
+ /// overwriting the entry if it exists.
+ /// Only applicable when using preprocessor cache mode.
+ async fn put_preprocessor_cache_entry(
+ &self,
+ key: &str,
+ preprocessor_cache_entry: PreprocessorCacheEntry,
+ ) -> Result<()> {
+ if self.config.rw_mode == CacheModeConfig::ReadOnly {
+ bail!("Cannot write to a read-only cache");
+ }
+ match self.cache {
+ None => Ok(()),
+ Some(ref cache) => {
+ assert!(self.config.use_preprocessor_cache_mode);
+ let key = normalize_key(key);
+ info!("PreprocessorCache: put_preprocessor_cache_entry({})", key);
+ let mut f = cache.lock().unwrap().get_or_init()?.prepare_add(key, 0)?;
+ preprocessor_cache_entry.serialize_to(BufWriter::new(f.as_file_mut()))?;
+ Ok(cache.lock().unwrap().get().unwrap().commit(f)?)
+ }
+ }
+ }
+}
diff --git a/src/cache/readonly.rs b/src/cache/readonly.rs
index 90431c4fb..1a99fee86 100644
--- a/src/cache/readonly.rs
+++ b/src/cache/readonly.rs
@@ -16,11 +16,8 @@ use std::time::Duration;
use async_trait::async_trait;
use crate::cache::{Cache, CacheMode, CacheWrite, Storage};
-use crate::compiler::PreprocessorCacheEntry;
use crate::errors::*;
-use super::PreprocessorCacheModeConfig;
-
pub struct ReadOnlyStorage(pub Arc);
#[async_trait]
@@ -58,32 +55,6 @@ impl Storage for ReadOnlyStorage {
async fn max_size(&self) -> Result> {
self.0.max_size().await
}
-
- /// Return the config for preprocessor cache mode if applicable
- fn preprocessor_cache_mode_config(&self) -> PreprocessorCacheModeConfig {
- self.0.preprocessor_cache_mode_config()
- }
-
- /// Return the preprocessor cache entry for a given preprocessor key,
- /// if it exists.
- /// Only applicable when using preprocessor cache mode.
- async fn get_preprocessor_cache_entry(
- &self,
- key: &str,
- ) -> Result<Option<Box<dyn ReadSeek>>> {
- self.0.get_preprocessor_cache_entry(key).await
- }
-
- /// Insert a preprocessor cache entry at the given preprocessor key,
- /// overwriting the entry if it exists.
- /// Only applicable when using preprocessor cache mode.
- async fn put_preprocessor_cache_entry(
- &self,
- _key: &str,
- _preprocessor_cache_entry: PreprocessorCacheEntry,
- ) -> Result<()> {
- Err(anyhow!("Cannot write to read-only storage"))
- }
}
#[cfg(test)]
@@ -95,32 +66,13 @@ mod test {
#[test]
fn readonly_storage_is_readonly() {
- let storage = ReadOnlyStorage(Arc::new(MockStorage::new(None, false)));
+ let storage = ReadOnlyStorage(Arc::new(MockStorage::new(None)));
assert_eq!(
storage.check().now_or_never().unwrap().unwrap(),
CacheMode::ReadOnly
);
}
- #[test]
- fn readonly_storage_forwards_preprocessor_cache_mode_config() {
- let storage_no_preprocessor_cache =
- ReadOnlyStorage(Arc::new(MockStorage::new(None, false)));
- assert!(
- !storage_no_preprocessor_cache
- .preprocessor_cache_mode_config()
- .use_preprocessor_cache_mode
- );
-
- let storage_with_preprocessor_cache =
- ReadOnlyStorage(Arc::new(MockStorage::new(None, true)));
- assert!(
- storage_with_preprocessor_cache
- .preprocessor_cache_mode_config()
- .use_preprocessor_cache_mode
- );
- }
-
#[test]
fn readonly_storage_put_err() {
let runtime = tokio::runtime::Builder::new_current_thread()
@@ -129,7 +81,7 @@ mod test {
.build()
.unwrap();
- let storage = ReadOnlyStorage(Arc::new(MockStorage::new(None, true)));
+ let storage = ReadOnlyStorage(Arc::new(MockStorage::new(None)));
runtime.block_on(async move {
assert_eq!(
storage
@@ -139,14 +91,6 @@ mod test {
.to_string(),
"Cannot write to read-only storage"
);
- assert_eq!(
- storage
- .put_preprocessor_cache_entry("test1", PreprocessorCacheEntry::default())
- .await
- .unwrap_err()
- .to_string(),
- "Cannot write to read-only storage"
- );
});
}
}
diff --git a/src/cache/storage.rs b/src/cache/storage.rs
new file mode 100644
index 000000000..86a2b8fab
--- /dev/null
+++ b/src/cache/storage.rs
@@ -0,0 +1,173 @@
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+use super::cache_io::{Cache, CacheMode, CacheRead, CacheWrite};
+#[cfg(any(
+ feature = "azure",
+ feature = "gcs",
+ feature = "gha",
+ feature = "memcached",
+ feature = "redis",
+ feature = "s3",
+ feature = "webdav",
+ feature = "oss",
+ feature = "cos"
+))]
+use crate::cache::utils::normalize_key;
+use crate::errors::*;
+use async_trait::async_trait;
+use std::time::Duration;
+
+/// An interface to cache storage.
+#[async_trait]
+pub trait Storage: Send + Sync {
+ /// Get a cache entry by `key`.
+ ///
+ /// If an error occurs, this method should return a `Cache::Error`.
+ /// If nothing fails but the entry is not found in the cache,
+ /// it should return a `Cache::Miss`.
+ /// If the entry is successfully found in the cache, it should
+ /// return a `Cache::Hit`.
+ async fn get(&self, key: &str) -> Result<Cache>;
+
+ /// Put `entry` in the cache under `key`.
+ ///
+ /// Returns a `Future` that will provide the result or error when the put is
+ /// finished.
+ async fn put(&self, key: &str, entry: CacheWrite) -> Result<Duration>;
+
+ /// Check the cache capability.
+ ///
+ /// - `Ok(CacheMode::ReadOnly)` means the cache can only be used for
+ /// `get` operations.
+ /// - `Ok(CacheMode::ReadWrite)` means the cache supports both `get` and `put`.
+ /// - `Err(err)` means the cache is not set up correctly or does not match
+ /// the user's configuration (for example, the user requested `ReadWrite`
+ /// but the cache is `ReadOnly`).
+ ///
+ /// We provide a default implementation that returns
+ /// `Ok(CacheMode::ReadWrite)` for services that don't support
+ /// this check yet.
+ async fn check(&self) -> Result<CacheMode> {
+ Ok(CacheMode::ReadWrite)
+ }
+
+ /// Get the storage location.
+ fn location(&self) -> String;
+
+ /// Get the current storage usage, if applicable.
+ async fn current_size(&self) -> Result<Option<u64>>;
+
+ /// Get the maximum storage size, if applicable.
+ async fn max_size(&self) -> Result<Option<u64>>;
+}
+
+/// Implement [`Storage`] for [`opendal::Operator`].
+#[cfg(any(
+ feature = "azure",
+ feature = "gcs",
+ feature = "gha",
+ feature = "memcached",
+ feature = "redis",
+ feature = "s3",
+ feature = "webdav",
+ feature = "oss",
+ feature = "cos"
+))]
+#[async_trait]
+impl Storage for opendal::Operator {
+ async fn get(&self, key: &str) -> Result<Cache> {
+ match self.read(&normalize_key(key)).await {
+ Ok(res) => {
+ let hit = CacheRead::from(std::io::Cursor::new(res.to_bytes()))?;
+ Ok(Cache::Hit(hit))
+ }
+ Err(e) if e.kind() == opendal::ErrorKind::NotFound => Ok(Cache::Miss),
+ Err(e) => {
+ warn!("Got unexpected error: {:?}", e);
+ Ok(Cache::Miss)
+ }
+ }
+ }
+
+ async fn put(&self, key: &str, entry: CacheWrite) -> Result<Duration> {
+ let start = std::time::Instant::now();
+
+ self.write(&normalize_key(key), entry.finish()?).await?;
+
+ Ok(start.elapsed())
+ }
+
+ async fn check(&self) -> Result<CacheMode> {
+ use opendal::ErrorKind;
+
+ let path = ".sccache_check";
+
+ // Read access is required; return an error directly if we can't read.
+ match self.read(path).await {
+ Ok(_) => (),
+ // A file that doesn't exist (NotFound) is fine.
+ Err(err) if err.kind() == ErrorKind::NotFound => (),
+ // Tricky part.
+ //
+ // We tolerate rate limiting here so that sccache keeps running;
+ // in the worst case we miss the cache entirely.
+ //
+ // In some very rare cases, users could misconfigure their storage
+ // and hit another service's rate limit. There is little we can do
+ // about that, so we print the error here to let users know.
+ Err(err) if err.kind() == ErrorKind::RateLimited => {
+ eprintln!("cache storage read check: {err:?}, but we decide to keep running");
+ }
+ Err(err) => bail!("cache storage failed to read: {:?}", err),
+ }
+
+ let can_write = match self.write(path, "Hello, World!").await {
+ Ok(_) => true,
+ Err(err) if err.kind() == ErrorKind::AlreadyExists => true,
+ // Tolerate all other write errors because we can do read at least.
+ Err(err) => {
+ eprintln!("storage write check failed: {err:?}");
+ false
+ }
+ };
+
+ let mode = if can_write {
+ CacheMode::ReadWrite
+ } else {
+ CacheMode::ReadOnly
+ };
+
+ debug!("storage check result: {mode:?}");
+
+ Ok(mode)
+ }
+
+ fn location(&self) -> String {
+ let meta = self.info();
+ format!(
+ "{}, name: {}, prefix: {}",
+ meta.scheme(),
+ meta.name(),
+ meta.root()
+ )
+ }
+
+ async fn current_size(&self) -> Result<Option<u64>> {
+ Ok(None)
+ }
+
+ async fn max_size(&self) -> Result<Option<u64>> {
+ Ok(None)
+ }
+}
diff --git a/src/cache/utils.rs b/src/cache/utils.rs
new file mode 100644
index 000000000..ee1186fd4
--- /dev/null
+++ b/src/cache/utils.rs
@@ -0,0 +1,62 @@
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+use fs_err as fs;
+
+use std::path::Path;
+
+use crate::errors::*;
+
+/// Normalize key `abcdef` into `a/b/c/abcdef`
+pub(in crate::cache) fn normalize_key(key: &str) -> String {
+ format!("{}/{}/{}/{}", &key[0..1], &key[1..2], &key[2..3], &key)
+}
+
+#[cfg(unix)]
+pub(in crate::cache) fn get_file_mode(file: &fs::File) -> Result<Option<u32>> {
+ use std::os::unix::fs::MetadataExt;
+ Ok(Some(file.metadata()?.mode()))
+}
+
+#[cfg(windows)]
+#[allow(clippy::unnecessary_wraps)]
+pub(in crate::cache) fn get_file_mode(_file: &fs::File) -> Result<Option<u32>> {
+ Ok(None)
+}
+
+#[cfg(unix)]
+pub(in crate::cache) fn set_file_mode(path: &Path, mode: u32) -> Result<()> {
+ use std::fs::Permissions;
+ use std::os::unix::fs::PermissionsExt;
+ let p = Permissions::from_mode(mode);
+ fs::set_permissions(path, p)?;
+ Ok(())
+}
+
+#[cfg(windows)]
+#[allow(clippy::unnecessary_wraps)]
+pub(in crate::cache) fn set_file_mode(_path: &Path, _mode: u32) -> Result<()> {
+ Ok(())
+}
+
+#[cfg(test)]
+mod test {
+ use super::*;
+
+ #[test]
+ fn test_normalize_key() {
+ assert_eq!(
+ normalize_key("0123456789abcdef0123456789abcdef"),
+ "0/1/2/0123456789abcdef0123456789abcdef"
+ );
+ }
+}
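The `normalize_key` helper above shards cache entries by the first three characters of the key, so entries spread across nested subdirectories instead of a single flat directory. Mirrored as a standalone sketch (not part of the patch itself):

```rust
// Shard a hex cache key into three single-character directory levels,
// e.g. "0123..." -> "0/1/2/0123...". Assumes keys are ASCII and at
// least three characters long, as the hash keys in the patch are.
fn normalize_key(key: &str) -> String {
    format!("{}/{}/{}/{}", &key[0..1], &key[1..2], &key[2..3], key)
}

fn main() {
    let key = "0123456789abcdef0123456789abcdef";
    assert_eq!(normalize_key(key), "0/1/2/0123456789abcdef0123456789abcdef");
    println!("{}", normalize_key(key));
}
```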
diff --git a/src/commands.rs b/src/commands.rs
index 9d7a4ed57..6efa994cc 100644
--- a/src/commands.rs
+++ b/src/commands.rs
@@ -12,11 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use crate::cache::storage_from_config;
+use crate::cache::get_storage_from_config;
use crate::client::{ServerConnection, connect_to_server, connect_with_retry};
use crate::cmdline::{Command, StatsFormat};
use crate::compiler::ColorMode;
-use crate::config::{Config, default_disk_cache_dir};
+use crate::config::Config;
use crate::jobserver::Client;
use crate::mock_command::{CommandChild, CommandCreatorSync, ProcessCommandCreator, RunCommand};
use crate::protocol::{Compile, CompileFinished, CompileResponse, Request, Response};
@@ -628,8 +628,14 @@ pub fn run_command(cmd: Command) -> Result {
// anyways, so we can just return (mostly) empty stats directly.
Err(_) => {
let runtime = Runtime::new()?;
- let storage = storage_from_config(config, runtime.handle()).ok();
- runtime.block_on(ServerInfo::new(ServerStats::default(), storage.as_deref()))?
+ let storage_data = get_storage_from_config(config, runtime.handle()).ok();
+ let storage = storage_data.as_ref().map(|(s, _)| s.clone());
+ let preprocessor_cache_storage = storage_data.as_ref().map(|(_, p)| p.clone());
+ runtime.block_on(ServerInfo::new(
+ ServerStats::default(),
+ storage.as_deref(),
+ preprocessor_cache_storage.as_deref(),
+ ))?
}
};
match fmt {
@@ -639,7 +645,11 @@ pub fn run_command(cmd: Command) -> Result {
}
Command::DebugPreprocessorCacheEntries => {
trace!("Command::DebugPreprocessorCacheEntries");
- let entries_dir = default_disk_cache_dir().join("preprocessor");
+ let entries_dir = config
+ .preprocessor_cache
+ .dir
+ .as_ref()
+ .expect("Preprocessor cache directory must be configured internally.");
for entry in WalkDir::new(entries_dir).sort_by_file_name().into_iter() {
let preprocessor_cache_entry_file = entry?;
let path = preprocessor_cache_entry_file.path();
diff --git a/src/compiler/c.rs b/src/compiler/c.rs
index 23e36b50c..89faa8233 100644
--- a/src/compiler/c.rs
+++ b/src/compiler/c.rs
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use crate::cache::{FileObjectSource, PreprocessorCacheModeConfig, Storage};
+use crate::cache::{FileObjectSource, PreprocessorCacheStorage, Storage};
use crate::compiler::preprocessor_cache::preprocessor_cache_entry_hash_key;
use crate::compiler::{
Cacheable, ColorMode, Compilation, CompileCommand, Compiler, CompilerArguments, CompilerHasher,
@@ -20,6 +20,7 @@ use crate::compiler::{
};
#[cfg(feature = "dist-client")]
use crate::compiler::{DistPackagers, NoopOutputsRewriter};
+use crate::config::PreprocessorCacheModeConfig;
use crate::dist;
#[cfg(feature = "dist-client")]
use crate::dist::pkg;
@@ -372,7 +373,8 @@ where
may_dist: bool,
pool: &tokio::runtime::Handle,
rewrite_includes_only: bool,
- storage: Arc<dyn Storage>,
+ _storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
cache_control: CacheControl,
) -> Result> {
let start_of_compilation = std::time::SystemTime::now();
@@ -393,7 +395,7 @@ where
// Try to look for a cached preprocessing step for this compilation
// request.
- let preprocessor_cache_mode_config = storage.preprocessor_cache_mode_config();
+ let preprocessor_cache_mode_config = preprocessor_cache_storage.get_config();
let too_hard_for_preprocessor_cache_mode = self
.parsed_args
.too_hard_for_preprocessor_cache_mode
@@ -415,6 +417,7 @@ where
let mut use_preprocessor_cache_mode = can_use_preprocessor_cache_mode;
// Allow overrides from the env
+ // FIXME: Is this still needed once the environment-based config is merged?
for (key, val) in env_vars.iter() {
if key == "SCCACHE_DIRECT" {
if let Some(val) = val.to_str() {
@@ -460,7 +463,7 @@ where
let (preprocessor_output, include_files) = if needs_preprocessing {
if let Some(preprocessor_key) = &preprocessor_key {
if cache_control == CacheControl::Default {
- if let Some(mut seekable) = storage
+ if let Some(mut seekable) = preprocessor_cache_storage
.get_preprocessor_cache_entry(preprocessor_key)
.await?
{
@@ -479,7 +482,7 @@ where
"Preprocessor cache updated because of time macros: {preprocessor_key}"
);
- if let Err(e) = storage
+ if let Err(e) = preprocessor_cache_storage
.put_preprocessor_cache_entry(
preprocessor_key,
preprocessor_cache_entry,
@@ -646,7 +649,7 @@ where
files.sort_unstable_by(|a, b| a.1.cmp(&b.1));
preprocessor_cache_entry.add_result(start_of_compilation, &key, files);
- if let Err(e) = storage
+ if let Err(e) = preprocessor_cache_storage
.put_preprocessor_cache_entry(&preprocessor_key, preprocessor_cache_entry)
.await
{
@@ -708,7 +711,7 @@ fn process_preprocessed_file(
cwd: &Path,
bytes: &mut [u8],
included_files: &mut HashMap,
- config: PreprocessorCacheModeConfig,
+ config: &PreprocessorCacheModeConfig,
time_of_compilation: std::time::SystemTime,
fs_impl: impl PreprocessorFSAbstraction,
) -> Result {
@@ -827,7 +830,7 @@ fn process_preprocessor_line(
input_file: &Path,
cwd: &Path,
included_files: &mut HashMap,
- config: PreprocessorCacheModeConfig,
+ config: &PreprocessorCacheModeConfig,
time_of_compilation: std::time::SystemTime,
bytes: &mut [u8],
mut start: usize,
@@ -1034,7 +1037,7 @@ fn remember_include_file(
included_files: &mut HashMap,
digest: &mut Digest,
system: bool,
- config: PreprocessorCacheModeConfig,
+ config: &PreprocessorCacheModeConfig,
time_of_compilation: std::time::SystemTime,
fs_impl: &impl PreprocessorFSAbstraction,
) -> Result {
@@ -1755,7 +1758,7 @@ mod test {
Path::new(""),
&mut bytes,
&mut include_files,
- config,
+ &config,
std::time::SystemTime::now(),
StandardFsAbstraction,
)
@@ -1829,7 +1832,7 @@ mod test {
input_file,
Path::new(""),
include_files,
- config,
+ &config,
std::time::SystemTime::now(),
&mut bytes,
0,
diff --git a/src/compiler/clang.rs b/src/compiler/clang.rs
index e3b613b0e..ef021cec3 100644
--- a/src/compiler/clang.rs
+++ b/src/compiler/clang.rs
@@ -265,6 +265,7 @@ mod test {
use crate::compiler::*;
use crate::mock_command::*;
use crate::server;
+ use crate::test::mock_preprocessor_cache::MockPreprocessorCacheStorage;
use crate::test::mock_storage::MockStorage;
use crate::test::utils::*;
use std::collections::HashMap;
@@ -1163,9 +1164,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
diff --git a/src/compiler/compiler.rs b/src/compiler/compiler.rs
index ac876355a..860a64d7b 100644
--- a/src/compiler/compiler.rs
+++ b/src/compiler/compiler.rs
@@ -13,7 +13,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use crate::cache::{Cache, CacheWrite, DecompressionFailure, FileObjectSource, Storage};
+use crate::cache::{
+ Cache, CacheWrite, DecompressionFailure, FileObjectSource, PreprocessorCacheStorage, Storage,
+};
use crate::compiler::args::*;
use crate::compiler::c::{CCompiler, CCompilerKind};
use crate::compiler::cicc::Cicc;
@@ -473,6 +475,7 @@ where
pool: &tokio::runtime::Handle,
rewrite_includes_only: bool,
 storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
cache_control: CacheControl,
) -> Result>;
@@ -488,6 +491,7 @@ where
dist_client: Option>,
creator: T,
 storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
 arguments: Vec<OsString>,
cwd: PathBuf,
env_vars: Vec<(OsString, OsString)>,
@@ -511,6 +515,7 @@ where
&pool,
rewrite_includes_only,
storage.clone(),
+ preprocessor_cache_storage.clone(),
cache_control,
)
.await;
@@ -669,12 +674,14 @@ where
// This compilation only had enough information to find and use a cache entry (or to
// run a local compile, which doesn't need locally preprocessed code).
// For distributed compilation, the local preprocessing step still needs to be done.
+
return self
.get_cached_or_compile(
service,
dist_client,
creator,
storage,
+ preprocessor_cache_storage,
arguments,
cwd,
env_vars,
@@ -1855,8 +1862,10 @@ where
mod test {
use super::*;
use crate::cache::disk::DiskCache;
- use crate::cache::{CacheMode, CacheRead, PreprocessorCacheModeConfig};
+ use crate::cache::{CacheMode, CacheRead, PreprocessorCache};
+ use crate::config::PreprocessorCacheModeConfig;
use crate::mock_command::*;
+ use crate::test::mock_preprocessor_cache::MockPreprocessorCacheStorage;
use crate::test::mock_storage::MockStorage;
use crate::test::utils::*;
use fs::File;
@@ -2266,7 +2275,8 @@ LLVM version: 6.0",
false,
pool,
false,
- Arc::new(MockStorage::new(None, preprocessor_cache_mode)),
+ Arc::new(MockStorage::new(None)),
+ Arc::new(MockPreprocessorCacheStorage::new(preprocessor_cache_mode)),
CacheControl::Default,
)
.wait()
@@ -2334,7 +2344,8 @@ LLVM version: 6.0",
false,
pool,
false,
- Arc::new(MockStorage::new(None, preprocessor_cache_mode)),
+ Arc::new(MockStorage::new(None)),
+ Arc::new(MockPreprocessorCacheStorage::new(preprocessor_cache_mode)),
CacheControl::Default,
)
.wait()
@@ -2400,7 +2411,8 @@ LLVM version: 6.0",
false,
pool,
false,
- Arc::new(MockStorage::new(None, preprocessor_cache_mode)),
+ Arc::new(MockStorage::new(None)),
+ Arc::new(MockPreprocessorCacheStorage::new(preprocessor_cache_mode)),
CacheControl::Default,
)
.wait()
@@ -2444,16 +2456,28 @@ LLVM version: 6.0",
f.tempdir.path().join("cache"),
u64::MAX,
&pool,
- PreprocessorCacheModeConfig {
- use_preprocessor_cache_mode: preprocessor_cache_mode,
- ..Default::default()
- },
CacheMode::ReadWrite,
);
+ let preprocessor_cache_storage = if preprocessor_cache_mode {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: true,
+ dir: Some(f.tempdir.path().join("preprocessor")),
+ ..Default::default()
+ }))
+ } else {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: false,
+ ..Default::default()
+ }))
+ };
// Write a dummy input file so the preprocessor cache mode can work
std::fs::write(f.tempdir.path().join("foo.c"), "whatever").unwrap();
let storage = Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage.clone(), pool.clone());
+ let service = server::SccacheService::mock_with_storage(
+ storage.clone(),
+ preprocessor_cache_storage.clone(),
+ pool.clone(),
+ );
// Pretend to be GCC.
next_command(
@@ -2506,6 +2530,7 @@ LLVM version: 6.0",
None,
creator.clone(),
storage.clone(),
+ preprocessor_cache_storage.clone(),
arguments.clone(),
cwd.to_path_buf(),
vec![],
@@ -2543,6 +2568,7 @@ LLVM version: 6.0",
None,
creator,
storage,
+ preprocessor_cache_storage,
arguments,
cwd.to_path_buf(),
vec![],
@@ -2574,12 +2600,20 @@ LLVM version: 6.0",
f.tempdir.path().join("cache"),
u64::MAX,
&pool,
- PreprocessorCacheModeConfig {
- use_preprocessor_cache_mode: preprocessor_cache_mode,
- ..Default::default()
- },
CacheMode::ReadWrite,
);
+ let preprocessor_cache_storage = if preprocessor_cache_mode {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: true,
+ dir: Some(f.tempdir.path().join("preprocessor")),
+ ..Default::default()
+ }))
+ } else {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: false,
+ ..Default::default()
+ }))
+ };
// Write a dummy input file so the preprocessor cache mode can work
std::fs::write(f.tempdir.path().join("foo.c"), "whatever").unwrap();
let storage = Arc::new(storage);
@@ -2618,6 +2652,7 @@ LLVM version: 6.0",
let service = server::SccacheService::mock_with_dist_client(
dist_client.clone(),
storage.clone(),
+ preprocessor_cache_storage.clone(),
pool.clone(),
);
@@ -2635,6 +2670,7 @@ LLVM version: 6.0",
Some(dist_client.clone()),
creator.clone(),
storage.clone(),
+ preprocessor_cache_storage.clone(),
arguments.clone(),
cwd.to_path_buf(),
vec![],
@@ -2672,6 +2708,7 @@ LLVM version: 6.0",
Some(dist_client.clone()),
creator,
storage,
+ preprocessor_cache_storage,
arguments,
cwd.to_path_buf(),
vec![],
@@ -2700,9 +2737,15 @@ LLVM version: 6.0",
let gcc = f.mk_bin("gcc").unwrap();
let runtime = Runtime::new().unwrap();
let pool = runtime.handle().clone();
- let storage = MockStorage::new(None, preprocessor_cache_mode);
+ let storage = MockStorage::new(None);
 let storage: Arc<dyn Storage> = Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage.clone(), pool.clone());
+ let preprocessor_cache_storage =
+ Arc::new(MockPreprocessorCacheStorage::new(preprocessor_cache_mode));
+ let service = server::SccacheService::mock_with_storage(
+ storage.clone(),
+ preprocessor_cache_storage.clone(),
+ pool.clone(),
+ );
// Write a dummy input file so the preprocessor cache mode can work
std::fs::write(f.tempdir.path().join("foo.c"), "whatever").unwrap();
@@ -2757,6 +2800,7 @@ LLVM version: 6.0",
None,
creator,
storage,
+ preprocessor_cache_storage,
arguments.clone(),
cwd.to_path_buf(),
vec![],
@@ -2793,9 +2837,15 @@ LLVM version: 6.0",
std::fs::write(f.tempdir.path().join("foo.c"), "whatever").unwrap();
// Make our storage wait 2ms for each get/put operation.
let storage_delay = Duration::from_millis(2);
- let storage = MockStorage::new(Some(storage_delay), preprocessor_cache_mode);
+ let storage = MockStorage::new(Some(storage_delay));
 let storage: Arc<dyn Storage> = Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage.clone(), pool.clone());
+ let preprocessor_cache_storage =
+ Arc::new(MockPreprocessorCacheStorage::new(preprocessor_cache_mode));
+ let service = server::SccacheService::mock_with_storage(
+ storage.clone(),
+ preprocessor_cache_storage.clone(),
+ pool.clone(),
+ );
// Pretend to be GCC.
next_command(
&creator,
@@ -2849,6 +2899,7 @@ LLVM version: 6.0",
None,
creator,
storage,
+ preprocessor_cache_storage,
arguments.clone(),
cwd.to_path_buf(),
vec![],
@@ -2877,14 +2928,26 @@ LLVM version: 6.0",
f.tempdir.path().join("cache"),
u64::MAX,
&pool,
- PreprocessorCacheModeConfig {
- use_preprocessor_cache_mode: preprocessor_cache_mode,
- ..Default::default()
- },
CacheMode::ReadWrite,
);
let storage = Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage.clone(), pool.clone());
+ let preprocessor_cache_storage = if preprocessor_cache_mode {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: true,
+ dir: Some(f.tempdir.path().join("preprocessor")),
+ ..Default::default()
+ }))
+ } else {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: false,
+ ..Default::default()
+ }))
+ };
+ let service = server::SccacheService::mock_with_storage(
+ storage.clone(),
+ preprocessor_cache_storage.clone(),
+ pool.clone(),
+ );
// Write a dummy input file so the preprocessor cache mode can work
std::fs::write(f.tempdir.path().join("foo.c"), "whatever").unwrap();
// Pretend to be GCC.
@@ -2942,6 +3005,7 @@ LLVM version: 6.0",
None,
creator.clone(),
storage.clone(),
+ preprocessor_cache_storage.clone(),
arguments.clone(),
cwd.to_path_buf(),
vec![],
@@ -2971,6 +3035,7 @@ LLVM version: 6.0",
None,
creator,
storage,
+ preprocessor_cache_storage,
arguments,
cwd.to_path_buf(),
vec![],
@@ -3006,14 +3071,26 @@ LLVM version: 6.0",
f.tempdir.path().join("cache"),
u64::MAX,
&pool,
- PreprocessorCacheModeConfig {
- use_preprocessor_cache_mode: preprocessor_cache_mode,
- ..Default::default()
- },
CacheMode::ReadWrite,
);
let storage = Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage.clone(), pool.clone());
+ let preprocessor_cache_storage = if preprocessor_cache_mode {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: true,
+ dir: Some(f.tempdir.path().join("preprocessor")),
+ ..Default::default()
+ }))
+ } else {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: false,
+ ..Default::default()
+ }))
+ };
+ let service = server::SccacheService::mock_with_storage(
+ storage.clone(),
+ preprocessor_cache_storage.clone(),
+ pool.clone(),
+ );
// Pretend to be GCC. Also inject a fake object file that the subsequent
// preprocessor failure should remove.
@@ -3064,6 +3141,7 @@ LLVM version: 6.0",
None,
creator,
storage,
+ preprocessor_cache_storage,
arguments,
cwd.to_path_buf(),
vec![],
@@ -3104,13 +3182,21 @@ LLVM version: 6.0",
f.tempdir.path().join("cache"),
u64::MAX,
&pool,
- PreprocessorCacheModeConfig {
- use_preprocessor_cache_mode: preprocessor_cache_mode,
- ..Default::default()
- },
CacheMode::ReadWrite,
);
let storage = Arc::new(storage);
+ let preprocessor_cache_storage = if preprocessor_cache_mode {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: true,
+ dir: Some(f.tempdir.path().join("preprocessor")),
+ ..Default::default()
+ }))
+ } else {
+ Arc::new(PreprocessorCache::new(&PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: false,
+ ..Default::default()
+ }))
+ };
// Pretend to be GCC.
next_command(
&creator,
@@ -3163,6 +3249,7 @@ LLVM version: 6.0",
let service = server::SccacheService::mock_with_dist_client(
dist_client.clone(),
storage.clone(),
+ preprocessor_cache_storage.clone(),
pool.clone(),
);
@@ -3176,6 +3263,7 @@ LLVM version: 6.0",
Some(dist_client.clone()),
creator.clone(),
storage.clone(),
+ preprocessor_cache_storage.clone(),
arguments.clone(),
cwd.to_path_buf(),
vec![],
diff --git a/src/compiler/diab.rs b/src/compiler/diab.rs
index a11e57860..a34bf6a43 100644
--- a/src/compiler/diab.rs
+++ b/src/compiler/diab.rs
@@ -463,6 +463,7 @@ mod test {
use crate::compiler::*;
use crate::mock_command::*;
use crate::server;
+ use crate::test::mock_preprocessor_cache::MockPreprocessorCacheStorage;
use crate::test::mock_storage::MockStorage;
use crate::test::utils::*;
use fs::File;
@@ -792,9 +793,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
diff --git a/src/compiler/gcc.rs b/src/compiler/gcc.rs
index 32e640ecd..54d6d0088 100644
--- a/src/compiler/gcc.rs
+++ b/src/compiler/gcc.rs
@@ -1037,6 +1037,7 @@ mod test {
use crate::compiler::*;
use crate::mock_command::*;
use crate::server;
+ use crate::test::mock_preprocessor_cache::MockPreprocessorCacheStorage;
use crate::test::mock_storage::MockStorage;
use crate::test::utils::*;
@@ -2275,9 +2276,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
@@ -2336,9 +2343,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
@@ -2395,9 +2408,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
diff --git a/src/compiler/msvc.rs b/src/compiler/msvc.rs
index fdf4e87f5..e67855ee9 100644
--- a/src/compiler/msvc.rs
+++ b/src/compiler/msvc.rs
@@ -1393,6 +1393,7 @@ mod test {
use crate::compiler::*;
use crate::mock_command::*;
use crate::server;
+ use crate::test::mock_preprocessor_cache::MockPreprocessorCacheStorage;
use crate::test::mock_storage::MockStorage;
use crate::test::utils::*;
@@ -2553,9 +2554,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
@@ -2643,9 +2650,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
diff --git a/src/compiler/preprocessor_cache.rs b/src/compiler/preprocessor_cache.rs
index d03cd6e18..bad3e3f20 100644
--- a/src/compiler/preprocessor_cache.rs
+++ b/src/compiler/preprocessor_cache.rs
@@ -33,7 +33,7 @@ use chrono::Datelike;
use serde::{Deserialize, Serialize};
use crate::{
- cache::PreprocessorCacheModeConfig,
+ config::PreprocessorCacheModeConfig,
util::{Digest, HashToDigest, MetadataCtimeExt, Timestamp, encode_path},
};
@@ -176,7 +176,7 @@ impl PreprocessorCacheEntry {
/// are already on disk and have not changed.
pub fn lookup_result_digest(
&mut self,
- config: PreprocessorCacheModeConfig,
+ config: &PreprocessorCacheModeConfig,
updated: &mut bool,
) -> Option {
// Check newest result first since it's more likely to match.
@@ -193,7 +193,7 @@ impl PreprocessorCacheEntry {
fn result_matches(
digest: &str,
includes: &mut [IncludeEntry],
- config: PreprocessorCacheModeConfig,
+ config: &PreprocessorCacheModeConfig,
updated: &mut bool,
) -> bool {
for include in includes {
@@ -380,7 +380,7 @@ pub fn preprocessor_cache_entry_hash_key(
env_vars: &[(OsString, OsString)],
input_file: &Path,
plusplus: bool,
- config: PreprocessorCacheModeConfig,
+ config: &PreprocessorCacheModeConfig,
) -> anyhow::Result> {
// If you change any of the inputs to the hash, you should change `FORMAT_VERSION`.
let mut m = Digest::new();
diff --git a/src/compiler/rust.rs b/src/compiler/rust.rs
index ea42d61c5..81ad52b0a 100644
--- a/src/compiler/rust.rs
+++ b/src/compiler/rust.rs
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use crate::cache::{FileObjectSource, Storage};
+use crate::cache::{FileObjectSource, PreprocessorCacheStorage, Storage};
use crate::compiler::args::*;
use crate::compiler::{
CCompileCommand, Cacheable, ColorMode, Compilation, CompileCommand, Compiler,
@@ -1334,6 +1334,7 @@ where
pool: &tokio::runtime::Handle,
_rewrite_includes_only: bool,
 _storage: Arc<dyn Storage>,
+ _preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
_cache_control: CacheControl,
) -> Result> {
trace!("[{}]: generate_hash_key", self.parsed_args.crate_name);
@@ -2658,6 +2659,7 @@ mod test {
use crate::compiler::*;
use crate::mock_command::*;
+ use crate::test::mock_preprocessor_cache::MockPreprocessorCacheStorage;
use crate::test::mock_storage::MockStorage;
use crate::test::utils::*;
use fs::File;
@@ -3511,7 +3513,8 @@ proc_macro false
false,
&pool,
false,
- Arc::new(MockStorage::new(None, preprocessor_cache_mode)),
+ Arc::new(MockStorage::new(None)),
+ Arc::new(MockPreprocessorCacheStorage::new(preprocessor_cache_mode)),
CacheControl::Default,
)
.wait()
@@ -3603,7 +3606,8 @@ proc_macro false
false,
&pool,
false,
- Arc::new(MockStorage::new(None, preprocessor_cache_mode)),
+ Arc::new(MockStorage::new(None)),
+ Arc::new(MockPreprocessorCacheStorage::new(preprocessor_cache_mode)),
CacheControl::Default,
)
.wait()
diff --git a/src/compiler/tasking_vx.rs b/src/compiler/tasking_vx.rs
index b3fff8238..1def50eb7 100644
--- a/src/compiler/tasking_vx.rs
+++ b/src/compiler/tasking_vx.rs
@@ -406,6 +406,7 @@ mod test {
use crate::compiler::*;
use crate::mock_command::*;
use crate::server;
+ use crate::test::mock_preprocessor_cache::MockPreprocessorCacheStorage;
use crate::test::mock_storage::MockStorage;
use crate::test::utils::*;
@@ -730,9 +731,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
@@ -784,9 +791,15 @@ mod test {
too_hard_for_preprocessor_cache_mode: None,
};
let runtime = single_threaded_runtime();
- let storage = MockStorage::new(None, false);
+ let storage = MockStorage::new(None);
 let storage: std::sync::Arc<dyn Storage> = std::sync::Arc::new(storage);
- let service = server::SccacheService::mock_with_storage(storage, runtime.handle().clone());
+ let preprocessor_cache_storage =
+ std::sync::Arc::new(MockPreprocessorCacheStorage::new(false));
+ let service = server::SccacheService::mock_with_storage(
+ storage,
+ preprocessor_cache_storage,
+ runtime.handle().clone(),
+ );
let compiler = &f.bins[0];
// Compiler invocation.
next_command(&creator, Ok(MockChild::new(exit_status(0), "", "")));
diff --git a/src/config.rs b/src/config.rs
index 3145a238c..00b4b35b8 100644
--- a/src/config.rs
+++ b/src/config.rs
@@ -32,7 +32,6 @@ use std::str::FromStr;
use std::sync::{LazyLock, Mutex};
use std::{collections::HashMap, fmt};
-pub use crate::cache::PreprocessorCacheModeConfig;
use crate::errors::*;
static CACHED_CONFIG_PATH: LazyLock = LazyLock::new(CachedConfig::file_config_path);
@@ -186,6 +185,67 @@ pub struct AzureCacheConfig {
pub key_prefix: String,
}
+/// Configuration switches for preprocessor cache mode.
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+#[serde(deny_unknown_fields)]
+#[serde(default)]
+pub struct PreprocessorCacheModeConfig {
+ /// Whether to use preprocessor cache mode entirely
+ pub use_preprocessor_cache_mode: bool,
+ /// If false (default), only compare header files by hashing their contents.
+ /// If true, will use size + ctime + mtime to check whether a file has changed.
+ /// See other flags below for more control over this behavior.
+ pub file_stat_matches: bool,
+ /// If true (default), uses the ctime (file status change on UNIX,
+ /// creation time on Windows) to check that a file has/hasn't changed.
+ /// Can be useful to disable when backdating modification times
+ /// in a controlled manner.
+ pub use_ctime_for_stat: bool,
+ /// If true, ignore `__DATE__`, `__TIME__` and `__TIMESTAMP__` being present
+ /// in the source code. Will speed up preprocessor cache mode,
+ /// but can result in false positives.
+ pub ignore_time_macros: bool,
+ /// If true, preprocessor cache mode will not cache system headers, only
+ /// add them to the hash.
+ pub skip_system_headers: bool,
+ /// If true (default), will add the current working directory in the hash to
+ /// distinguish two compilations from different directories.
+ pub hash_working_directory: bool,
+ /// Maximum size of the cache
+ #[serde(deserialize_with = "deserialize_size_from_str")]
+ pub max_size: u64,
+ /// Read/write mode for the preprocessor cache
+ pub rw_mode: CacheModeConfig,
+ /// Cache directory
+ pub dir: Option,
+}
+
+impl Default for PreprocessorCacheModeConfig {
+ fn default() -> Self {
+ Self {
+ use_preprocessor_cache_mode: false,
+ file_stat_matches: false,
+ use_ctime_for_stat: true,
+ ignore_time_macros: false,
+ skip_system_headers: false,
+ hash_working_directory: true,
+ max_size: default_disk_cache_size(),
+ rw_mode: CacheModeConfig::ReadWrite,
+ dir: None,
+ }
+ }
+}
+
+impl PreprocessorCacheModeConfig {
+ /// Return a default [`Self`], but with the cache active.
+ pub fn activated() -> Self {
+ Self {
+ use_preprocessor_cache_mode: true,
+ ..Default::default()
+ }
+ }
+}
+
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
#[serde(default)]
@@ -193,7 +253,6 @@ pub struct DiskCacheConfig {
pub dir: PathBuf,
#[serde(deserialize_with = "deserialize_size_from_str")]
pub size: u64,
- pub preprocessor_cache_mode: PreprocessorCacheModeConfig,
pub rw_mode: CacheModeConfig,
}
@@ -202,7 +261,6 @@ impl Default for DiskCacheConfig {
DiskCacheConfig {
dir: default_disk_cache_dir(),
size: default_disk_cache_size(),
- preprocessor_cache_mode: PreprocessorCacheModeConfig::activated(),
rw_mode: CacheModeConfig::ReadWrite,
}
}
@@ -395,13 +453,19 @@ pub struct CacheConfigs {
pub s3: Option<S3CacheConfig>,
pub webdav: Option<WebdavCacheConfig>,
pub oss: Option<OSSCacheConfig>,
+ pub preprocessor: Option<PreprocessorCacheModeConfig>,
pub cos: Option<COSCacheConfig>,
}
impl CacheConfigs {
- /// Return cache type in an arbitrary but
- /// consistent ordering
- fn into_fallback(self) -> (Option<CacheType>, DiskCacheConfig) {
+ /// Return cache type in an arbitrary but consistent ordering
+ fn into_fallback(
+ self,
+ ) -> (
+ Option<CacheType>,
+ DiskCacheConfig,
+ PreprocessorCacheModeConfig,
+ ) {
let CacheConfigs {
azure,
disk,
@@ -412,6 +476,7 @@ impl CacheConfigs {
s3,
webdav,
oss,
+ preprocessor,
cos,
} = self;
@@ -427,8 +492,13 @@ impl CacheConfigs {
.or_else(|| cos.map(CacheType::COS));
let fallback = disk.unwrap_or_default();
+ let mut preprocessor_config = preprocessor.unwrap_or_default();
+
+ if preprocessor_config.dir.is_none() {
+ preprocessor_config.dir = Some(fallback.dir.join("preprocessor"));
+ }
- (cache_type, fallback)
+ (cache_type, fallback, preprocessor_config)
}
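The dir-defaulting logic in `into_fallback` above can be isolated as a one-liner: an unset preprocessor cache directory is derived from the disk cache directory. A standalone sketch:

```rust
use std::path::{Path, PathBuf};

// Sketch of the fallback rule: explicit config wins, otherwise the
// preprocessor cache lives in a "preprocessor" subdirectory of the
// disk cache.
fn preprocessor_dir(configured: Option<PathBuf>, disk_dir: &Path) -> PathBuf {
    configured.unwrap_or_else(|| disk_dir.join("preprocessor"))
}

fn main() {
    let disk = PathBuf::from("/tmp/sccache");
    assert_eq!(
        preprocessor_dir(None, &disk),
        PathBuf::from("/tmp/sccache/preprocessor")
    );
    assert_eq!(
        preprocessor_dir(Some(PathBuf::from("/custom")), &disk),
        PathBuf::from("/custom")
    );
    println!("ok");
}
```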
/// Override self with any existing fields from other
@@ -443,6 +513,7 @@ impl CacheConfigs {
s3,
webdav,
oss,
+ preprocessor,
cos,
} = other;
@@ -476,6 +547,9 @@ impl CacheConfigs {
if cos.is_some() {
self.cos = cos;
}
+ if preprocessor.is_some() {
+ self.preprocessor = preprocessor;
+ }
}
}
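The merge rule above (`if preprocessor.is_some() { self.preprocessor = preprocessor; }`) means a later config layer overrides field-by-field, but only when it actually sets the field. A minimal sketch of that rule:

```rust
// Sketch: a layer's field wins only when it is `Some`, so env config
// can override file config without clobbering unset fields.
fn merge_field<T>(mine: &mut Option<T>, other: Option<T>) {
    if other.is_some() {
        *mine = other;
    }
}

fn main() {
    let mut preprocessor = Some("file-config");
    merge_field(&mut preprocessor, None); // env layer unset: keep file value
    assert_eq!(preprocessor, Some("file-config"));
    merge_field(&mut preprocessor, Some("env-config")); // env layer set: override
    assert_eq!(preprocessor, Some("env-config"));
    println!("ok");
}
```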
@@ -922,20 +996,22 @@ fn config_from_env() -> Result<EnvConfig> {
None
};
+ // ======= Preprocessor cache =======
+ let preprocessor_mode_config = if let Some(value) = bool_from_env_var("SCCACHE_DIRECT")? {
+ Some(PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: value,
+ ..Default::default()
+ })
+ } else {
+ None
+ };
+
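The block above turns `SCCACHE_DIRECT` into a tri-state: unset yields `None` (the file config's value survives the merge), while any set value produces an override. A sketch of that shape; the accepted spellings here are assumptions, the real parsing lives in sccache's `bool_from_env_var`:

```rust
// Sketch of the tri-state override: None = no override, Some(_) = forced.
fn parse_direct(var: Option<&str>) -> Option<bool> {
    match var {
        Some("true") | Some("on") | Some("1") => Some(true),
        Some("false") | Some("off") | Some("0") => Some(false),
        _ => None, // unset (or unrecognized, here): defaults/file config apply
    }
}

fn main() {
    assert_eq!(parse_direct(Some("true")), Some(true));
    assert_eq!(parse_direct(Some("off")), Some(false));
    assert_eq!(parse_direct(None), None);
    println!("ok");
}
```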
// ======= Local =======
let disk_dir = env::var_os("SCCACHE_DIR").map(PathBuf::from);
let disk_sz = env::var("SCCACHE_CACHE_SIZE")
.ok()
.and_then(|v| parse_size(&v));
- let mut preprocessor_mode_config = PreprocessorCacheModeConfig::activated();
- let preprocessor_mode_overridden = if let Some(value) = bool_from_env_var("SCCACHE_DIRECT")? {
- preprocessor_mode_config.use_preprocessor_cache_mode = value;
- true
- } else {
- false
- };
-
let (disk_rw_mode, disk_rw_mode_overridden) = match env::var("SCCACHE_LOCAL_RW_MODE")
.as_ref()
.map(String::as_str)
@@ -949,15 +1025,11 @@ fn config_from_env() -> Result<EnvConfig> {
_ => (CacheModeConfig::ReadWrite, false),
};
- let any_overridden = disk_dir.is_some()
- || disk_sz.is_some()
- || preprocessor_mode_overridden
- || disk_rw_mode_overridden;
+ let any_overridden = disk_dir.is_some() || disk_sz.is_some() || disk_rw_mode_overridden;
let disk = if any_overridden {
Some(DiskCacheConfig {
dir: disk_dir.unwrap_or_else(default_disk_cache_dir),
size: disk_sz.unwrap_or_else(default_disk_cache_size),
- preprocessor_cache_mode: preprocessor_mode_config,
rw_mode: disk_rw_mode,
})
} else {
@@ -974,6 +1046,7 @@ fn config_from_env() -> Result<EnvConfig> {
s3,
webdav,
oss,
+ preprocessor: preprocessor_mode_config,
cos,
};
@@ -1006,6 +1079,7 @@ fn config_file(env_var: &str, leaf: &str) -> PathBuf {
#[derive(Debug, Default, PartialEq, Eq)]
pub struct Config {
pub cache: Option<CacheType>,
+ pub preprocessor_cache: PreprocessorCacheModeConfig,
pub fallback_cache: DiskCacheConfig,
pub dist: DistConfig,
pub server_startup_timeout: Option<std::time::Duration>,
@@ -1039,9 +1113,10 @@ impl Config {
let EnvConfig { cache } = env_conf;
conf_caches.merge(cache);
- let (caches, fallback_cache) = conf_caches.into_fallback();
+ let (caches, fallback_cache, preprocessor_cache) = conf_caches.into_fallback();
Self {
cache: caches,
+ preprocessor_cache,
fallback_cache,
dist,
server_startup_timeout,
@@ -1304,7 +1379,6 @@ fn config_overrides() {
disk: Some(DiskCacheConfig {
dir: "/env-cache".into(),
size: 5,
- preprocessor_cache_mode: Default::default(),
rw_mode: CacheModeConfig::ReadWrite,
}),
redis: Some(RedisCacheConfig {
@@ -1325,7 +1399,6 @@ fn config_overrides() {
disk: Some(DiskCacheConfig {
dir: "/file-cache".into(),
size: 15,
- preprocessor_cache_mode: Default::default(),
rw_mode: CacheModeConfig::ReadWrite,
}),
memcached: Some(MemcachedCacheConfig {
@@ -1358,10 +1431,13 @@ fn config_overrides() {
password: Some("secret".to_owned()),
..Default::default()
}),),
+ preprocessor_cache: PreprocessorCacheModeConfig {
+ dir: Some("/env-cache/preprocessor".into()),
+ ..Default::default()
+ },
fallback_cache: DiskCacheConfig {
dir: "/env-cache".into(),
size: 5,
- preprocessor_cache_mode: Default::default(),
rw_mode: CacheModeConfig::ReadWrite,
},
dist: Default::default(),
@@ -1537,6 +1613,17 @@ token = "secrettoken"
dir = "/tmp/.cache/sccache"
size = 7516192768 # 7 GiBytes
+[cache.preprocessor]
+use_preprocessor_cache_mode = true
+file_stat_matches = true
+use_ctime_for_stat = true
+ignore_time_macros = false
+skip_system_headers = true
+hash_working_directory = true
+max_size = 1028576 # ~1 MiB
+rw_mode = "READ_WRITE"
+dir = "/tmp/.cache/sccache-preprocessor"
+
[cache.gcs]
rw_mode = "READ_ONLY"
# rw_mode = "READ_WRITE"
@@ -1606,7 +1693,6 @@ key_prefix = "cosprefix"
disk: Some(DiskCacheConfig {
dir: PathBuf::from("/tmp/.cache/sccache"),
size: 7 * 1024 * 1024 * 1024,
- preprocessor_cache_mode: PreprocessorCacheModeConfig::activated(),
rw_mode: CacheModeConfig::ReadWrite,
}),
gcs: Some(GCSCacheConfig {
@@ -1666,6 +1752,17 @@ key_prefix = "cosprefix"
endpoint: Some("cos.na-siliconvalley.myqcloud.com".to_owned()),
key_prefix: "cosprefix".into(),
}),
+ preprocessor: Some(PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: true,
+ file_stat_matches: true,
+ use_ctime_for_stat: true,
+ ignore_time_macros: false,
+ skip_system_headers: true,
+ hash_working_directory: true,
+ max_size: 1028576,
+ rw_mode: CacheModeConfig::ReadWrite,
+ dir: Some("/tmp/.cache/sccache-preprocessor".into()),
+ }),
},
dist: DistConfig {
auth: DistAuth::Token {
diff --git a/src/server.rs b/src/server.rs
index 9f6abff22..33f062486 100644
--- a/src/server.rs
+++ b/src/server.rs
@@ -13,7 +13,7 @@
// limitations under the License.SCCACHE_MAX_FRAME_LENGTH
use crate::cache::readonly::ReadOnlyStorage;
-use crate::cache::{CacheMode, Storage, storage_from_config};
+use crate::cache::{CacheMode, PreprocessorCacheStorage, Storage, get_storage_from_config};
use crate::compiler::{
CacheControl, CompileResult, Compiler, CompilerArguments, CompilerHasher, CompilerKind,
CompilerProxy, DistType, Language, MissType, get_compiler_info,
@@ -449,7 +449,8 @@ pub fn start_server(config: &Config, addr: &crate::net::SocketAddr) -> Result<()>
let notify = env::var_os("SCCACHE_STARTUP_NOTIFY");
- let raw_storage = match storage_from_config(config, &pool) {
+ let (raw_storage, raw_preprocessor_cache_storage) = match get_storage_from_config(config, &pool)
+ {
Ok(storage) => storage,
Err(err) => {
error!("storage init failed for: {err:?}");
@@ -494,8 +495,14 @@ pub fn start_server(config: &Config, addr: &crate::net::SocketAddr) -> Result<()>
crate::net::SocketAddr::Net(addr) => {
trace!("binding TCP {addr}");
let l = runtime.block_on(tokio::net::TcpListener::bind(addr))?;
- let srv =
- SccacheServer::<_>::with_listener(l, runtime, client, dist_client, storage);
+ let srv = SccacheServer::<_>::with_listener(
+ l,
+ runtime,
+ client,
+ dist_client,
+ storage,
+ raw_preprocessor_cache_storage,
+ );
Ok((
srv.local_addr().unwrap(),
Box::new(move |f| srv.run(f)) as Box<dyn FnOnce(_) -> _>,
@@ -510,8 +517,14 @@ pub fn start_server(config: &Config, addr: &crate::net::SocketAddr) -> Result<()>
let _guard = runtime.enter();
tokio::net::UnixListener::bind(path)?
};
- let srv =
- SccacheServer::<_>::with_listener(l, runtime, client, dist_client, storage);
+ let srv = SccacheServer::<_>::with_listener(
+ l,
+ runtime,
+ client,
+ dist_client,
+ storage,
+ raw_preprocessor_cache_storage,
+ );
Ok((
srv.local_addr().unwrap(),
Box::new(move |f| srv.run(f)) as Box<dyn FnOnce(_) -> _>,
@@ -527,8 +540,14 @@ pub fn start_server(config: &Config, addr: &crate::net::SocketAddr) -> Result<()>
let _guard = runtime.enter();
tokio::net::UnixListener::from_std(l)?
};
- let srv =
- SccacheServer::<_>::with_listener(l, runtime, client, dist_client, storage);
+ let srv = SccacheServer::<_>::with_listener(
+ l,
+ runtime,
+ client,
+ dist_client,
+ storage,
+ raw_preprocessor_cache_storage,
+ );
Ok((
srv.local_addr()
.unwrap_or_else(|| crate::net::SocketAddr::UnixAbstract(p.to_vec())),
@@ -584,6 +603,7 @@ impl SccacheServer {
client: Client,
dist_client: DistClientContainer,
storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
) -> Result<Self> {
let addr = crate::net::SocketAddr::with_port(port);
let listener = runtime.block_on(tokio::net::TcpListener::bind(addr.as_net().unwrap()))?;
@@ -594,6 +614,7 @@ impl SccacheServer {
client,
dist_client,
storage,
+ preprocessor_cache_storage,
))
}
}
@@ -605,13 +626,22 @@ impl SccacheServer {
client: Client,
dist_client: DistClientContainer,
storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
) -> Self {
// Prepare the service which we'll use to service all incoming TCP
// connections.
let (tx, rx) = mpsc::channel(1);
let (wait, info) = WaitUntilZero::new();
let pool = runtime.handle().clone();
- let service = SccacheService::new(dist_client, storage, &client, pool, tx, info);
+ let service = SccacheService::new(
+ dist_client,
+ storage,
+ preprocessor_cache_storage,
+ &client,
+ pool,
+ tx,
+ info,
+ );
SccacheServer {
runtime,
@@ -631,8 +661,13 @@ impl SccacheServer {
/// Set the storage this server will use.
#[allow(dead_code)]
- pub fn set_storage(&mut self, storage: Arc<dyn Storage>) {
+ pub fn set_storage(
+ &mut self,
+ storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
+ ) {
self.service.storage = storage;
+ self.service.preprocessor_cache_storage = preprocessor_cache_storage;
}
/// Returns a reference to a thread pool to run work on
@@ -792,6 +827,9 @@ where
/// Cache storage.
storage: Arc<dyn Storage>,
+ /// Preprocessor cache storage.
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
+
/// A cache of known compiler info.
compilers: Arc>>,
@@ -917,6 +955,7 @@ where
pub fn new(
dist_client: DistClientContainer,
storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
client: &Client,
rt: tokio::runtime::Handle,
tx: mpsc::Sender<ServerMessage>,
@@ -926,6 +965,7 @@ where
stats: Arc::default(),
dist_client: Arc::new(dist_client),
storage,
+ preprocessor_cache_storage,
compilers: Arc::default(),
compiler_proxies: Arc::default(),
rt,
@@ -937,6 +977,7 @@ where
pub fn mock_with_storage(
storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
rt: tokio::runtime::Handle,
) -> SccacheService {
let (tx, _) = mpsc::channel(1);
@@ -947,6 +988,7 @@ where
stats: Arc::default(),
dist_client: Arc::new(dist_client),
storage,
+ preprocessor_cache_storage,
compilers: Arc::default(),
compiler_proxies: Arc::default(),
rt,
@@ -960,6 +1002,7 @@ where
pub fn mock_with_dist_client(
dist_client: Arc,
storage: Arc<dyn Storage>,
+ preprocessor_cache_storage: Arc<dyn PreprocessorCacheStorage>,
rt: tokio::runtime::Handle,
) -> SccacheService {
let (tx, _) = mpsc::channel(1);
@@ -980,6 +1023,7 @@ where
dist_client,
))),
storage,
+ preprocessor_cache_storage,
compilers: Arc::default(),
compiler_proxies: Arc::default(),
rt: rt.clone(),
@@ -1043,7 +1087,12 @@ where
/// Get info and stats about the cache.
async fn get_info(&self) -> Result<ServerInfo> {
let stats = self.stats.lock().await.clone();
- ServerInfo::new(stats, Some(&*self.storage)).await
+ ServerInfo::new(
+ stats,
+ Some(&*self.storage),
+ Some(&*self.preprocessor_cache_storage),
+ )
+ .await
}
/// Zero stats about the cache.
@@ -1339,6 +1388,7 @@ where
client,
me.creator.clone(),
me.storage.clone(),
+ me.preprocessor_cache_storage.clone(),
arguments,
cwd,
env_vars,
@@ -1927,24 +1977,27 @@ fn set_percentage_stat(
}
impl ServerInfo {
- pub async fn new(stats: ServerStats, storage: Option<&dyn Storage>) -> Result<ServerInfo> {
- let cache_location;
- let use_preprocessor_cache_mode;
- let cache_size;
- let max_cache_size;
+ pub async fn new(
+ stats: ServerStats,
+ storage: Option<&dyn Storage>,
+ preprocessor_cache_storage: Option<&dyn PreprocessorCacheStorage>,
+ ) -> Result<ServerInfo> {
+ let mut cache_location = String::new();
+ let mut cache_size = None;
+ let mut max_cache_size = None;
if let Some(storage) = storage {
cache_location = storage.location();
- use_preprocessor_cache_mode = storage
- .preprocessor_cache_mode_config()
- .use_preprocessor_cache_mode;
(cache_size, max_cache_size) =
futures::try_join!(storage.current_size(), storage.max_size())?;
- } else {
- cache_location = String::new();
- use_preprocessor_cache_mode = false;
- cache_size = None;
- max_cache_size = None;
}
+
+ let mut use_preprocessor_cache_mode = false;
+ if let Some(preprocessor_storage) = preprocessor_cache_storage {
+ let config = preprocessor_storage.get_config();
+
+ use_preprocessor_cache_mode = config.use_preprocessor_cache_mode;
+ }
+
let version = env!("CARGO_PKG_VERSION").to_string();
Ok(ServerInfo {
stats,
@@ -1965,18 +2018,16 @@ impl ServerInfo {
self.cache_location,
name_width = name_width
);
- if self.cache_location.starts_with("Local disk") {
- println!(
- "{:<name_width$} {}",
- "Use direct/preprocessor cache mode",
- self.use_preprocessor_cache_mode,
- name_width = name_width
- );
- }
diff --git a/src/test/mock_preprocessor_cache.rs b/src/test/mock_preprocessor_cache.rs
new file mode 100644
--- /dev/null
+++ b/src/test/mock_preprocessor_cache.rs
+use crate::cache::PreprocessorCacheStorage;
+use crate::config::PreprocessorCacheModeConfig;
+use async_trait::async_trait;
+
+pub struct MockPreprocessorCacheStorage {
+ config: PreprocessorCacheModeConfig,
+}
+
+impl MockPreprocessorCacheStorage {
+ pub fn new(use_preprocessor_cache_mode: bool) -> MockPreprocessorCacheStorage {
+ Self {
+ config: PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode,
+ ..Default::default()
+ },
+ }
+ }
+}
+
+#[async_trait]
+impl PreprocessorCacheStorage for MockPreprocessorCacheStorage {
+ fn get_config(&self) -> &PreprocessorCacheModeConfig {
+ &self.config
+ }
+
+ // TODO Implement get_preprocessor_cache_entry and put_preprocessor_cache_entry
+}
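The mock works because `ServerInfo` only needs `get_config()` through the trait object, so tests can substitute any implementor. A self-contained sketch of that pattern (names simplified, not the real sccache types):

```rust
// Sketch: ServerInfo-style code takes an Option<&dyn Trait> and falls
// back to "disabled" when no preprocessor cache storage is present.
struct Config {
    use_preprocessor_cache_mode: bool,
}

trait PreprocessorCacheStorage {
    fn get_config(&self) -> &Config;
}

struct Mock {
    config: Config,
}

impl PreprocessorCacheStorage for Mock {
    fn get_config(&self) -> &Config {
        &self.config
    }
}

// Mirrors the logic in ServerInfo::new: absent storage means mode off.
fn use_mode(s: Option<&dyn PreprocessorCacheStorage>) -> bool {
    s.map_or(false, |s| s.get_config().use_preprocessor_cache_mode)
}

fn main() {
    let mock = Mock {
        config: Config {
            use_preprocessor_cache_mode: true,
        },
    };
    assert!(use_mode(Some(&mock)));
    assert!(!use_mode(None));
    println!("ok");
}
```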
diff --git a/src/test/mock_storage.rs b/src/test/mock_storage.rs
index 00a6aa7c8..5cfa391d7 100644
--- a/src/test/mock_storage.rs
+++ b/src/test/mock_storage.rs
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use crate::cache::{Cache, CacheWrite, PreprocessorCacheModeConfig, Storage};
+use crate::cache::{Cache, CacheWrite, Storage};
use crate::errors::*;
use async_trait::async_trait;
use futures::channel::mpsc;
@@ -26,18 +26,16 @@ pub struct MockStorage {
rx: Arc<Mutex<mpsc::UnboundedReceiver<Result<Cache>>>>,
tx: mpsc::UnboundedSender<Result<Cache>>,
delay: Option<Duration>,
- preprocessor_cache_mode: bool,
}
impl MockStorage {
/// Create a new `MockStorage`. if `delay` is `Some`, wait for that amount of time before returning from operations.
- pub(crate) fn new(delay: Option<Duration>, preprocessor_cache_mode: bool) -> MockStorage {
+ pub(crate) fn new(delay: Option<Duration>) -> MockStorage {
let (tx, rx) = mpsc::unbounded();
Self {
tx,
rx: Arc::new(Mutex::new(rx)),
delay,
- preprocessor_cache_mode,
}
}
@@ -74,10 +72,4 @@ impl Storage for MockStorage {
async fn max_size(&self) -> Result<Option<u64>> {
Ok(None)
}
- fn preprocessor_cache_mode_config(&self) -> PreprocessorCacheModeConfig {
- PreprocessorCacheModeConfig {
- use_preprocessor_cache_mode: self.preprocessor_cache_mode,
- ..Default::default()
- }
- }
}
diff --git a/src/test/mod.rs b/src/test/mod.rs
index a78dc2daa..0c1ffacf1 100644
--- a/src/test/mod.rs
+++ b/src/test/mod.rs
@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+pub mod mock_preprocessor_cache;
pub mod mock_storage;
#[macro_use]
pub mod utils;
diff --git a/src/test/tests.rs b/src/test/tests.rs
index ee41f6c2e..e6dac5da6 100644
--- a/src/test/tests.rs
+++ b/src/test/tests.rs
@@ -12,10 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use crate::cache::CacheMode;
use crate::cache::disk::DiskCache;
-use crate::cache::{CacheMode, PreprocessorCacheModeConfig};
use crate::client::connect_to_server;
use crate::commands::{do_compile, request_shutdown, request_stats};
+use crate::config::PreprocessorCacheModeConfig;
use crate::jobserver::Client;
use crate::mock_command::*;
use crate::server::{DistClientContainer, SccacheServer, ServerMessage};
@@ -83,12 +84,28 @@ where
&cache_dir,
cache_size,
runtime.handle(),
- PreprocessorCacheModeConfig::default(),
CacheMode::ReadWrite,
));
+ let preprocessor_cache_storage =
+ Arc::new(crate::cache::preprocessor_cache::PreprocessorCache::new(
+ &PreprocessorCacheModeConfig {
+ dir: Some(cache_dir),
+ max_size: cache_size / 10,
+ use_preprocessor_cache_mode: true,
+ ..Default::default()
+ },
+ ));
let client = Client::new();
- let srv = SccacheServer::new(0, runtime, client, dist_client, storage).unwrap();
+ let srv = SccacheServer::new(
+ 0,
+ runtime,
+ client,
+ dist_client,
+ storage,
+ preprocessor_cache_storage,
+ )
+ .unwrap();
let mut srv: SccacheServer<_, Arc<Mutex<MockCommandCreator>>> = srv;
let addr = srv.local_addr().unwrap();
assert!(matches!(addr, crate::net::SocketAddr::Net(a) if a.port() > 0));
diff --git a/tests/harness/mod.rs b/tests/harness/mod.rs
index 4a1be898c..7923ba7c3 100644
--- a/tests/harness/mod.rs
+++ b/tests/harness/mod.rs
@@ -163,10 +163,10 @@ pub fn sccache_client_cfg(
let disk_cache = sccache::config::DiskCacheConfig {
dir: tmpdir.join(cache_relpath),
- preprocessor_cache_mode: sccache::config::PreprocessorCacheModeConfig {
- use_preprocessor_cache_mode: preprocessor_cache_mode,
- ..Default::default()
- },
+ ..Default::default()
+ };
+ let preprocessor_cache_config = sccache::config::PreprocessorCacheModeConfig {
+ use_preprocessor_cache_mode: preprocessor_cache_mode,
..Default::default()
};
sccache::config::FileConfig {
@@ -180,6 +180,7 @@ pub fn sccache_client_cfg(
s3: None,
webdav: None,
oss: None,
+ preprocessor: Some(preprocessor_cache_config),
cos: None,
},
dist: sccache::config::DistConfig {