April 21, 2021

DESIGN VISIONARY SCIENCE by a TECHPRENEUR

"A hero is not chosen; opportunities force him to take a step, no matter how much it costs!"

Global Cyber Security News

My intention here is to list the latest and most important cyber security news from well-known websites around the world, so that we can follow the newest developments in the cyber security sector. Just as the world faces risks and uncertainty from health problems, there are also security risks for PCs and mobile phones arising from cyber technology. By collecting the latest news on one page, we can collaborate for the benefit of the community. You can see the latest news from RSS feeds on this page, and then check the source webpages for your own work. For more details, see the source pages. Here they are, simply listed!

  • A New Standard for Mobile App Security
    by Google on April 15, 2021 at 1:00 pm

    Posted by Brooke Davis and Eugene Liderman, Android Security and Privacy Team

With all of the challenges from this past year, users have become increasingly dependent on their mobile devices to create fitness routines, stay connected with loved ones, work remotely, and order things like groceries with ease. According to eMarketer, in 2020 users spent over three and a half hours per day using mobile apps. With so much time spent on mobile devices, ensuring the safety of mobile apps is more important than ever.

Despite the importance of digital security, there isn’t a consistent industry standard for assessing mobile apps. Existing guidelines tend to be either too lightweight or too onerous for the average developer, and lack a compliance arm. That’s why we're excited to share ioXt’s announcement of a new Mobile Application Profile which provides a set of security and privacy requirements with defined acceptance criteria which developers can certify their apps against. Over 20 industry stakeholders, including Google, Amazon, and a number of certified labs such as NCC Group and Dekra, as well as automated mobile app security testing vendors like NowSecure, collaborated to develop this new security standard for mobile apps. We’ve seen early interest from Internet of Things (IoT) and virtual private network (VPN) developers; however, the standard is appropriate for any cloud connected service such as social, messaging, fitness, or productivity apps.

The Internet of Secure Things Alliance (ioXt) manages a security compliance assessment program for connected devices. ioXt has over 300 members across various industries, including Google, Amazon, Facebook, T-Mobile, Comcast, Zigbee Alliance, Z-Wave Alliance, Legrand, Resideo, Schneider Electric, and many others.
With so many companies involved, ioXt covers a wide range of device types, including smart lighting, smart speakers, and webcams, and since most smart devices are managed through apps, they have expanded coverage to include mobile apps with the launch of this profile. The ioXt Mobile Application Profile provides a minimum set of commercial best practices for all cloud connected apps running on mobile devices. This security baseline helps mitigate against common threats and reduces the probability of significant vulnerabilities. The profile leverages existing standards and principles set forth by OWASP MASVS and the VPN Trust Initiative, and allows developers to differentiate security capabilities around cryptography, authentication, network security, and vulnerability disclosure program quality. The profile also provides a framework to evaluate app category specific requirements which may be applied based on the features contained in the app. For example, an IoT app only needs to certify under the Mobile Application profile, whereas a VPN app must comply with the Mobile Application profile, plus the VPN extension. Certification allows developers to demonstrate product safety and we’re excited about the opportunity for this standard to push the industry forward. We observed that app developers were very quick to resolve any issues that were identified during their blackbox evaluations against this new standard, oftentimes with turnarounds in a matter of days. At launch, the following apps have been certified: Comcast, ExpressVPN, GreenMAX, Hubspace, McAfee Innovations, NordVPN, OpenVPN for Android, Private Internet Access, VPN Private, as well as the Google One app, including VPN by Google One. We look forward to seeing adoption of the standard grow over time and for those app developers that are already investing in security best practices to be able to highlight their efforts. 
The standard also serves as a guiding light to inspire more developers to invest in mobile app security. If you are interested in learning more about the ioXt Alliance and how to get your app certified, visit https://compliance.ioxtalliance.org/sign-up and check out Android’s guidelines for building secure apps here.

  • Rust in the Linux kernel
    by Google on April 14, 2021 at 11:27 pm

    Posted by Wedson Almeida Filho, Android Team

In our previous post, we announced that Android now supports the Rust programming language for developing the OS itself. Related to this, we are also participating in the effort to evaluate the use of Rust as a supported language for developing the Linux kernel. In this post, we discuss some technical aspects of this work using a few simple examples.

C has been the language of choice for writing kernels for almost half a century because it offers the level of control and predictable performance required by such a critical component. Density of memory safety bugs in the Linux kernel is generally quite low due to high code quality, high standards of code review, and carefully implemented safeguards. However, memory safety bugs do still regularly occur. On Android, vulnerabilities in the kernel are generally considered high-severity because they can result in a security model bypass due to the privileged mode that the kernel runs in.

We feel that Rust is now ready to join C as a practical language for implementing the kernel. It can help us reduce the number of potential bugs and security vulnerabilities in privileged code while playing nicely with the core kernel and preserving its performance characteristics.

Supporting Rust

We developed an initial prototype of the Binder driver to allow us to make meaningful comparisons between the safety and performance characteristics of the existing C version and its Rust counterpart. The Linux kernel has over 30 million lines of code, so naturally our goal is not to convert it all to Rust but rather to allow new code to be written in Rust. We believe this incremental approach allows us to benefit from the kernel’s existing high-performance implementation while providing kernel developers with new tools to improve memory safety and maintain performance going forward.
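To make the mixed C/Rust approach concrete, here is a small userspace sketch (ordinary Rust, not kernel code, with illustrative names) of the discipline the kernel work applies: an unsafe operation, here a raw-pointer read standing in for a call into existing C functionality, is confined to a small block with a documented SAFETY justification behind a safe API.

```rust
// Userspace sketch: unsafe operations are kept small, documented, and hidden
// behind a safe function, so callers never touch `unsafe` directly.
fn first_byte(v: &[u8]) -> Option<u8> {
    if v.is_empty() {
        return None;
    }
    let p = v.as_ptr();
    // SAFETY: `v` is non-empty, so `p` points to at least one valid,
    // initialized byte that lives as long as the borrow of `v`.
    Some(unsafe { *p })
}

fn main() {
    assert_eq!(first_byte(b"abc"), Some(b'a'));
    assert_eq!(first_byte(b""), None);
}
```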
We joined the Rust for Linux organization, where the community had already done and continues to do great work toward adding Rust support to the Linux kernel build system. We also need designs that allow code in the two languages to interact with each other: we're particularly interested in safe, zero-cost abstractions that allow Rust code to use kernel functionality written in C, and how to implement functionality in idiomatic Rust that can be called seamlessly from the C portions of the kernel.

Since Rust is a new language for the kernel, we also have the opportunity to enforce best practices in terms of documentation and uniformity. For example, we have specific machine-checked requirements around the usage of unsafe code: for every unsafe function, the developer must document the requirements that need to be satisfied by callers to ensure that its usage is safe; additionally, for every call to unsafe functions (or usage of unsafe constructs like dereferencing a raw pointer), the developer must document the justification for why it is safe to do so.

Just as important as safety, Rust support needs to be convenient and helpful for developers to use. Let’s get into a few examples of how Rust can assist kernel developers in writing drivers that are safe and correct.

Example driver

We'll use an implementation of a semaphore character device. Each device has a current value; writes of n bytes result in the device value being incremented by n; reads decrement the value by 1 unless the value is 0, in which case they will block until they can decrement the count without going below 0. Suppose semaphore is a file representing our device. We can interact with it from the shell as follows:

    > cat semaphore

When semaphore is a newly initialized device, the command above will block because the device's current value is 0.
It will be unblocked if we run the following command from another shell, because it increments the value by 1, which allows the original read to complete:

    > echo -n a > semaphore

We could also increment the count by more than 1 if we write more data. For example,

    > echo -n abc > semaphore

increments the count by 3, so the next 3 reads won't block.

To allow us to show a few more aspects of Rust, we'll add the following features to our driver: remember what the maximum value was throughout the lifetime of a device, and remember how many reads each file issued on the device.

We'll now show how such a driver would be implemented in Rust, contrasting it with a C implementation. We note, however, we are still early on so this is all subject to change in the future. How Rust can assist the developer is the aspect that we'd like to emphasize. For example, at compile time it allows us to eliminate or greatly reduce the chances of introducing classes of bugs, while at the same time remaining flexible and having minimal overhead.

Character devices

A developer needs to do the following to implement a driver for a new character device in Rust:

1. Implement the FileOperations trait: all associated functions are optional, so the developer only needs to implement the relevant ones for their scenario. They relate to the fields in C's struct file_operations.
2. Implement the FileOpener trait: it is a type-safe equivalent to C's open field of struct file_operations.
3. Register the new device type with the kernel: this lets the kernel know what functions need to be called in response to files of this new type being operated on.

The following outlines how the first two steps of our example compare in Rust and C.

Rust:

    impl FileOpener<Arc<Semaphore>> for FileState {
        fn open(shared: &Arc<Semaphore>) -> KernelResult<Box<Self>> {
            [...]
        }
    }

    impl FileOperations for FileState {
        type Wrapper = Box<Self>;

        fn read(&self, _: &File, data: &mut UserSlicePtrWriter, offset: u64) -> KernelResult<usize> {
            [...]
        }

        fn write(&self, data: &mut UserSlicePtrReader, _offset: u64) -> KernelResult<usize> {
            [...]
        }

        fn ioctl(&self, file: &File, cmd: &mut IoctlCommand) -> KernelResult<i32> {
            [...]
        }

        fn release(_obj: Box<Self>, _file: &File) {
            [...]
        }

        declare_file_operations!(read, write, ioctl);
    }

C:

    static int semaphore_open(struct inode *nodp, struct file *filp)
    {
        struct semaphore_state *shared = container_of(filp->private_data,
            struct semaphore_state, miscdev);
        [...]
    }

    static ssize_t semaphore_write(struct file *filp, const char __user *buffer,
        size_t count, loff_t *ppos)
    {
        struct file_state *state = filp->private_data;
        [...]
    }

    static ssize_t semaphore_read(struct file *filp, char __user *buffer,
        size_t count, loff_t *ppos)
    {
        struct file_state *state = filp->private_data;
        [...]
    }

    static long semaphore_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
        struct file_state *state = filp->private_data;
        [...]
    }

    static int semaphore_release(struct inode *nodp, struct file *filp)
    {
        struct file_state *state = filp->private_data;
        [...]
    }

    static const struct file_operations semaphore_fops = {
        .owner = THIS_MODULE,
        .open = semaphore_open,
        .read = semaphore_read,
        .write = semaphore_write,
        .compat_ioctl = semaphore_ioctl,
        .release = semaphore_release,
    };

Character devices in Rust benefit from a number of safety features:

Per-file state lifetime management: FileOpener::open returns an object whose lifetime is owned by the caller from then on. Any object that implements the PointerWrapper trait can be returned, and we provide implementations for Box<T> and Arc<T>, so developers that use Rust's idiomatic heap-allocated or reference-counted pointers have no additional requirements. All associated functions in FileOperations receive non-mutable references to self (more about this below), except the release function, which is the last function to be called and receives the plain object back (and its ownership with it).
The release implementation can then defer the object destruction by transferring its ownership elsewhere, or destroy it then; in the case of a reference-counted object, 'destruction' means decrementing the reference count (and actual object destruction if the count goes to zero). That is, we use Rust's ownership discipline when interacting with C code by handing the C portion ownership of a Rust object, allowing it to call functions implemented in Rust, then eventually giving ownership back. So as long as the C code is correct, the lifetime of Rust file objects works seamlessly as well, with the compiler enforcing correct lifetime management on the Rust side; for example, open cannot return stack-allocated pointers or heap-allocated objects containing pointers to the stack, ioctl/read/write cannot free (or modify without synchronization) the contents of the object stored in filp->private_data, etc.

Non-mutable references: the associated functions called between open and release all receive non-mutable references to self because they can be called concurrently by multiple threads and Rust aliasing rules prohibit more than one mutable reference to an object at any given time. If a developer needs to modify some state (and they generally do), they can do so via interior mutability: mutable state can be wrapped in a Mutex<T> or SpinLock<T> (or atomics) and safely modified through them. This prevents, at compile-time, bugs where a developer fails to acquire the appropriate lock when accessing a field (the field is inaccessible), or when a developer fails to wrap a field with a lock (the field is read-only).

Per-device state: when file instances need to share per-device state, which is a very common occurrence in drivers, they can do so safely in Rust. When a device is registered, a typed object can be provided and a non-mutable reference to it is provided when FileOpener::open is called.
In our example, the shared object is wrapped in Arc<T>, so files can safely clone and hold on to a reference to them. The reason FileOpener is its own trait (as opposed to, for example, open being part of the FileOperations trait) is to allow a single file implementation to be registered in different ways. This eliminates opportunities for developers to get the wrong data when trying to retrieve shared state. For example, in C when a miscdevice is registered, a pointer to it is available in filp->private_data; when a cdev is registered, a pointer to it is available in inode->i_cdev. These structs are usually embedded in an outer struct that contains the shared state, so developers usually use the container_of macro to recover the shared state. Rust encapsulates all of this and the potentially troublesome pointer casts in a safe abstraction.

Static typing: we take advantage of Rust's support for generics to implement all of the above functions and types with static types. So there are no opportunities for a developer to convert an untyped variable or field to the wrong type. The C code above has casts from an essentially untyped (void *) pointer to the desired type at the start of each function: this is likely to work fine when first written, but may lead to bugs as the code evolves and assumptions change. Rust would catch any such mistakes at compile time.

File operations: as we mentioned before, a developer needs to implement the FileOperations trait to customize the behavior of their device. They do this with a block starting with impl FileOperations for Device, where Device is the type implementing the file behavior (FileState in our example). Once inside this block, tools know that only a limited number of functions can be defined, so they can automatically insert the prototypes. (Personally, I use neovim and the rust-analyzer LSP server.)
While we use this trait in Rust, the C portion of the kernel still requires an instance of struct file_operations. The kernel crate automatically generates one from the trait implementation (and optionally the declare_file_operations macro): although it has code to generate the correct struct, it is all const, so evaluated at compile-time with zero runtime cost.

Ioctl handling

For a driver to provide a custom ioctl handler, it needs to implement the ioctl function that is part of the FileOperations trait, as exemplified below.

Rust:

    fn ioctl(&self, file: &File, cmd: &mut IoctlCommand) -> KernelResult<i32> {
        cmd.dispatch(self, file)
    }

    impl IoctlHandler for FileState {
        fn read(&self, _file: &File, cmd: u32, writer: &mut UserSlicePtrWriter) -> KernelResult<i32> {
            match cmd {
                IOCTL_GET_READ_COUNT => {
                    writer.write(&self.read_count.load(Ordering::Relaxed))?;
                    Ok(0)
                }
                _ => Err(Error::EINVAL),
            }
        }

        fn write(&self, _file: &File, cmd: u32, reader: &mut UserSlicePtrReader) -> KernelResult<i32> {
            match cmd {
                IOCTL_SET_READ_COUNT => {
                    self.read_count.store(reader.read()?, Ordering::Relaxed);
                    Ok(0)
                }
                _ => Err(Error::EINVAL),
            }
        }
    }

C:

    #define IOCTL_GET_READ_COUNT _IOR('c', 1, u64)
    #define IOCTL_SET_READ_COUNT _IOW('c', 1, u64)

    static long semaphore_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
        struct file_state *state = filp->private_data;
        void __user *buffer = (void __user *)arg;
        u64 value;

        switch (cmd) {
        case IOCTL_GET_READ_COUNT:
            value = atomic64_read(&state->read_count);
            if (copy_to_user(buffer, &value, sizeof(value)))
                return -EFAULT;
            return 0;
        case IOCTL_SET_READ_COUNT:
            if (copy_from_user(&value, buffer, sizeof(value)))
                return -EFAULT;
            atomic64_set(&state->read_count, value);
            return 0;
        default:
            return -EINVAL;
        }
    }

Ioctl commands are standardized such that, given a command, we know whether a user buffer is provided, its intended use (read, write, both, none), and its size.
In Rust, we provide a dispatcher (accessible by calling cmd.dispatch) that uses this information to automatically create user memory access helpers and pass them to the caller. A driver is not required to use this, though. If, for example, it doesn't use the standard ioctl encoding, Rust offers the flexibility of simply calling cmd.raw to extract the raw arguments and using them to handle the ioctl (potentially with unsafe code, which will need to be justified). However, if a driver implementation does use the standard dispatcher, it will benefit from not having to implement any unsafe code, and:

The pointer to user memory is never a native pointer, so the developer cannot accidentally dereference it.

The types that allow the driver to read from user space only allow data to be read once, so we eliminate the risk of time-of-check to time-of-use (TOCTOU) bugs, because when a driver needs to access data twice, it needs to copy it to kernel memory, where an attacker is not allowed to modify it. Excluding unsafe blocks, there is no way to introduce this class of bugs in Rust.

No accidental overflow of the user buffer: we'll never read or write past the end of the user buffer because this is enforced automatically based on the size encoded in the ioctl command. In our example above, the implementation of IOCTL_GET_READ_COUNT only has access to an instance of UserSlicePtrWriter, which limits the number of writable bytes to sizeof(u64) as encoded in the ioctl command.

No mixing of reads and writes: we'll never write buffers for ioctls that are only meant to read and never read buffers for ioctls that are only meant to write. This is enforced by read and write handlers only getting instances of UserSlicePtrWriter and UserSlicePtrReader respectively.
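The read-once property can be sketched in ordinary userspace Rust. The OnceReader type below is an illustrative stand-in for the kernel's UserSlicePtrReader, not the real API: because each byte is consumed as it is read, a driver cannot check a value and then re-read it after userspace had a chance to change it.

```rust
// Illustrative userspace sketch of a "read once" reader.
struct OnceReader {
    data: Vec<u8>,
    pos: usize,
}

impl OnceReader {
    fn new(data: Vec<u8>) -> Self {
        OnceReader { data, pos: 0 }
    }

    // Each byte can be handed out at most once.
    fn read_u8(&mut self) -> Option<u8> {
        let b = self.data.get(self.pos).copied();
        if b.is_some() {
            self.pos += 1;
        }
        b
    }
}

fn main() {
    let mut r = OnceReader::new(vec![42]);
    assert_eq!(r.read_u8(), Some(42)); // first read: copied into "kernel" memory
    assert_eq!(r.read_u8(), None);     // second read is impossible: no TOCTOU window
}
```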
All of the above could potentially also be done in C, but it's very easy for developers to (likely unintentionally) break contracts that lead to unsafety; Rust requires unsafe blocks for this, which should only be used in rare cases and brings additional scrutiny. Additionally, Rust offers the following:

The types used to read and write user memory do not implement the Send and Sync traits, which means that they (and pointers to them) are not safe to be used in another thread context. In Rust, if a driver developer attempted to write code that passed one of these objects to another thread (where it wouldn't be safe to use them because it isn't necessarily in the right memory manager context), they would get a compilation error.

When calling IoctlCommand::dispatch, one might understandably think that we need dynamic dispatching to reach the actual handler implementation (which would incur additional cost in comparison to C), but we don't. Our usage of generics will lead the compiler to monomorphize the function, which will result in static function calls that can even be inlined if the optimizer so chooses.

Locking and condition variables

We allow developers to use mutexes and spinlocks to provide interior mutability.
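The interior mutability pattern can be sketched with std::sync in ordinary userspace Rust (illustrative types, not the kernel crate's Mutex): the shared state is reached only through non-mutable references, and the mutable part is wrapped in a lock, so every mutation has to go through it.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Userspace sketch: the mutable field is only reachable through the Mutex.
struct Device {
    max_seen: Mutex<usize>,
}

// Note: takes &self-style shared access; no mutable reference is needed to mutate.
fn record(dev: &Device, value: usize) {
    let mut max_seen = dev.max_seen.lock().unwrap();
    if value > *max_seen {
        *max_seen = value;
    }
}

fn main() {
    let dev = Arc::new(Device { max_seen: Mutex::new(0) });
    // Several threads share the device immutably and mutate through the lock.
    let handles: Vec<_> = (1usize..=4)
        .map(|i| {
            let dev = Arc::clone(&dev);
            thread::spawn(move || record(&dev, i * 10))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*dev.max_seen.lock().unwrap(), 40);
}
```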
In our example, we use a mutex to protect mutable data; below we show the data structures we use in C and Rust, and how we implement a wait until the count is nonzero so that we can satisfy a read.

Rust:

    struct SemaphoreInner {
        count: usize,
        max_seen: usize,
    }

    struct Semaphore {
        changed: CondVar,
        inner: Mutex<SemaphoreInner>,
    }

    struct FileState {
        read_count: AtomicU64,
        shared: Arc<Semaphore>,
    }

    fn consume(&self) -> KernelResult {
        let mut inner = self.shared.inner.lock();
        while inner.count == 0 {
            if self.shared.changed.wait(&mut inner) {
                return Err(Error::EINTR);
            }
        }
        inner.count -= 1;
        Ok(())
    }

C:

    struct semaphore_state {
        struct kref ref;
        struct miscdevice miscdev;
        wait_queue_head_t changed;
        struct mutex mutex;
        size_t count;
        size_t max_seen;
    };

    struct file_state {
        atomic64_t read_count;
        struct semaphore_state *shared;
    };

    static int semaphore_consume(struct semaphore_state *state)
    {
        DEFINE_WAIT(wait);

        mutex_lock(&state->mutex);
        while (state->count == 0) {
            prepare_to_wait(&state->changed, &wait, TASK_INTERRUPTIBLE);
            mutex_unlock(&state->mutex);
            schedule();
            finish_wait(&state->changed, &wait);
            if (signal_pending(current))
                return -EINTR;
            mutex_lock(&state->mutex);
        }
        state->count--;
        mutex_unlock(&state->mutex);
        return 0;
    }

We note that such waits are not uncommon in the existing C code, for example, a pipe waiting for a "partner" to write, a unix-domain socket waiting for data, an inode search waiting for completion of a delete, or a user-mode helper waiting for state change.

The following are benefits from the Rust implementation:

The Semaphore::inner field is only accessible when the lock is held, through the guard returned by the lock function. So developers cannot accidentally read or write protected data without locking it first. In the C example above, count and max_seen in semaphore_state are protected by mutex, but there is no enforcement that the lock is held while they're accessed.
Resource Acquisition Is Initialization (RAII): the lock is unlocked automatically when the guard (inner in this case) goes out of scope. This ensures that locks are always unlocked: if the developer needs to keep a lock locked, they can keep the guard alive, for example, by returning the guard itself; conversely, if they need to unlock before the end of the scope, they can explicitly do it by calling the drop function.

Developers can use any lock that implements the Lock trait, which includes Mutex and SpinLock, at no additional runtime cost when compared to a C implementation. Other synchronization constructs, including condition variables, also work transparently and with zero additional run-time cost.

Rust implements condition variables using kernel wait queues. This allows developers to benefit from atomic release of the lock and putting the thread to sleep without having to reason about low-level kernel scheduler functions. In the C example above, semaphore_consume is a mix of semaphore logic and subtle Linux scheduling: for example, the code is incorrect if mutex_unlock is called before prepare_to_wait because it may result in a wake up being missed.

No unsynchronized access: as we mentioned before, variables shared by multiple threads/CPUs must be read-only, with interior mutability being the solution for cases when mutability is needed. In addition to the example with locks above, the ioctl example in the previous section also has an example of using an atomic variable; Rust also requires developers to specify how memory is to be synchronized by atomic accesses. In the C part of the example, we happen to use atomic64_t, but the compiler won't alert a developer to this need.
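Outside the kernel, the same wait loop can be written with std::sync::Condvar, which likewise releases the lock atomically while sleeping and re-acquires it before the loop condition is re-checked. This userspace sketch mirrors the consume logic described above; the types are illustrative, not the kernel crate's.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Userspace analogue of the semaphore's wait-until-nonzero logic.
struct Semaphore {
    changed: Condvar,
    count: Mutex<usize>,
}

fn consume(sem: &Semaphore) {
    let mut count = sem.count.lock().unwrap();
    while *count == 0 {
        // The lock is released atomically while waiting and re-acquired
        // before the loop condition is checked again.
        count = sem.changed.wait(count).unwrap();
    }
    *count -= 1;
}

fn main() {
    let sem = Arc::new(Semaphore { changed: Condvar::new(), count: Mutex::new(0) });
    let waiter = {
        let sem = Arc::clone(&sem);
        thread::spawn(move || consume(&sem)) // blocks while count is 0
    };
    *sem.count.lock().unwrap() += 1; // like `echo -n a > semaphore`
    sem.changed.notify_all();
    waiter.join().unwrap();
    assert_eq!(*sem.count.lock().unwrap(), 0);
}
```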
Error handling and control flow

Below, we show how open, read, and write are implemented in our example driver.

Rust read:

    fn read(&self, _: &File, data: &mut UserSlicePtrWriter, offset: u64) -> KernelResult<usize> {
        if data.is_empty() || offset > 0 {
            return Ok(0);
        }
        self.consume()?;
        data.write_slice(&[0u8; 1])?;
        self.read_count.fetch_add(1, Ordering::Relaxed);
        Ok(1)
    }

C read:

    static ssize_t semaphore_read(struct file *filp, char __user *buffer,
        size_t count, loff_t *ppos)
    {
        struct file_state *state = filp->private_data;
        char c = 0;
        int ret;

        if (count == 0 || *ppos > 0)
            return 0;

        ret = semaphore_consume(state->shared);
        if (ret)
            return ret;

        if (copy_to_user(buffer, &c, sizeof(c)))
            return -EFAULT;

        atomic64_add(1, &state->read_count);
        *ppos += 1;
        return 1;
    }

Rust write:

    fn write(&self, data: &mut UserSlicePtrReader, _offset: u64) -> KernelResult<usize> {
        {
            let mut inner = self.shared.inner.lock();
            inner.count = inner.count.saturating_add(data.len());
            if inner.count > inner.max_seen {
                inner.max_seen = inner.count;
            }
        }
        self.shared.changed.notify_all();
        Ok(data.len())
    }

C write:

    static ssize_t semaphore_write(struct file *filp, const char __user *buffer,
        size_t count, loff_t *ppos)
    {
        struct file_state *state = filp->private_data;
        struct semaphore_state *shared = state->shared;

        mutex_lock(&shared->mutex);
        shared->count += count;
        if (shared->count < count)
            shared->count = SIZE_MAX;
        if (shared->count > shared->max_seen)
            shared->max_seen = shared->count;
        mutex_unlock(&shared->mutex);

        wake_up_all(&shared->changed);
        return count;
    }

Rust open:

    fn open(shared: &Arc<Semaphore>) -> KernelResult<Box<Self>> {
        Ok(Box::try_new(Self {
            read_count: AtomicU64::new(0),
            shared: shared.clone(),
        })?)
    }

C open:

    static int semaphore_open(struct inode *nodp, struct file *filp)
    {
        struct semaphore_state *shared = container_of(filp->private_data,
            struct semaphore_state, miscdev);
        struct file_state *state;

        state = kzalloc(sizeof(*state), GFP_KERNEL);
        if (!state)
            return -ENOMEM;

        kref_get(&shared->ref);
        state->shared = shared;
        atomic64_set(&state->read_count, 0);
        filp->private_data = state;
        return 0;
    }

They illustrate other benefits brought by Rust:

The ? operator: it is used by the Rust open and read implementations to do error handling implicitly; the developer can focus on the semaphore logic, the resulting code being quite small and readable. The C versions have error-handling noise that can make them less readable.

Required initialization: Rust requires all fields of a struct to be initialized on construction, so the developer can never accidentally fail to initialize a field; C offers no such facility. In our open example above, the developer of the C version could easily fail to call kref_get (even though all fields would have been initialized); in Rust, the user is required to call clone (which increments the ref count), otherwise they get a compilation error.

RAII scoping: the Rust write implementation uses a statement block to control when inner goes out of scope and therefore the lock is released.

Integer overflow behavior: Rust encourages developers to always consider how overflows should be handled. In our write example, we want a saturating one so that we don't end up with a zero value when adding to our semaphore. In C, we need to manually check for overflows; there is no additional support from the compiler.

What's next

The examples above are only a small part of the whole project. We hope they give readers a glimpse of the kinds of benefits that Rust brings. At the moment we have nearly all generic kernel functionality needed by Binder neatly wrapped in safe Rust abstractions, so we are in the process of gathering feedback from the broader Linux kernel community with the intent of upstreaming the existing Rust support. We also continue to make progress on our Binder prototype, implement additional abstractions, and smooth out some rough edges. This is an exciting time and a rare opportunity to potentially influence how the Linux kernel is developed, as well as inform the evolution of the Rust language.
We invite those interested to join us in Rust for Linux and attend our planned talk at Linux Plumbers Conference 2021!

Thanks Nick Desaulniers, Kees Cook, and Adrian Taylor for contributions to this post. Special thanks to Jeff Vander Stoep for contributions and editing, and to Greg Kroah-Hartman for reviewing and contributing to the code examples.
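The error-handling and overflow points from the driver examples above can also be seen in ordinary userspace Rust. This sketch uses an illustrative errno-style i32 error value, not any kernel type: the ? operator propagates failures so the happy path stays readable, and saturating_add makes overflow behavior explicit.

```rust
// Illustrative userspace sketch of `?` propagation and saturating arithmetic.
fn parse_len(s: &str) -> Result<usize, i32> {
    s.trim().parse::<usize>().map_err(|_| 22) // EINVAL-style stand-in value
}

fn add_to_count(count: usize, s: &str) -> Result<usize, i32> {
    let n = parse_len(s)?; // on Err, returns immediately: no manual checks
    Ok(count.saturating_add(n)) // clamps at usize::MAX instead of wrapping
}

fn main() {
    assert_eq!(add_to_count(3, "4"), Ok(7));
    assert_eq!(add_to_count(3, "x"), Err(22));
    assert_eq!(add_to_count(usize::MAX, "5"), Ok(usize::MAX));
}
```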

  • Rust in the Android platform
    by Google on April 6, 2021 at 5:00 pm

    Posted by Jeff Vander Stoep and Stephen Hines, Android Team

Correctness of code in the Android platform is a top priority for the security, stability, and quality of each Android release. Memory safety bugs in C and C++ continue to be the most-difficult-to-address source of incorrectness. We invest a great deal of effort and resources into detecting, fixing, and mitigating this class of bugs, and these efforts are effective in preventing a large number of bugs from making it into Android releases. Yet in spite of these efforts, memory safety bugs continue to be a top contributor of stability issues, and consistently represent ~70% of Android’s high severity security vulnerabilities.

In addition to ongoing and upcoming efforts to improve detection of memory bugs, we are ramping up efforts to prevent them in the first place. Memory-safe languages are the most cost-effective means for preventing memory bugs. In addition to memory-safe languages like Kotlin and Java, we’re excited to announce that the Android Open Source Project (AOSP) now supports the Rust programming language for developing the OS itself.

Systems programming

Managed languages like Java and Kotlin are the best option for Android app development. These languages are designed for ease of use, portability, and safety. The Android Runtime (ART) manages memory on behalf of the developer. The Android OS uses Java extensively, effectively protecting large portions of the Android platform from memory bugs. Unfortunately, for the lower layers of the OS, Java and Kotlin are not an option. Lower levels of the OS require systems programming languages like C, C++, and Rust. These languages are designed with control and predictability as goals. They provide access to low level system resources and hardware. They are light on resources and have more predictable performance characteristics.

For C and C++, the developer is responsible for managing memory lifetime.
Unfortunately, it's easy to make mistakes when doing this, especially in complex and multithreaded codebases. Rust provides memory safety guarantees by using a combination of compile-time checks to enforce object lifetime/ownership and runtime checks to ensure that memory accesses are valid. This safety is achieved while providing equivalent performance to C and C++.

The limits of sandboxing

C and C++ languages don’t provide these same safety guarantees and require robust isolation. All Android processes are sandboxed, and we follow the Rule of 2 to decide if functionality necessitates additional isolation and deprivileging. The Rule of 2 is simple: code may combine at most two of the following three properties: processing untrustworthy inputs, being written in an unsafe language, and running with high privilege (that is, without a sandbox). For Android, this means that if code is written in C/C++ and parses untrustworthy input, it should be contained within a tightly constrained and unprivileged sandbox. While adherence to the Rule of 2 has been effective in reducing the severity and reachability of security vulnerabilities, it does come with limitations. Sandboxing is expensive: the new processes it requires consume additional overhead and introduce latency due to IPC and additional memory usage. Sandboxing doesn’t eliminate vulnerabilities from the code, and its efficacy is reduced by high bug density, allowing attackers to chain multiple vulnerabilities together.

Memory-safe languages like Rust help us overcome these limitations in two ways: they lower the density of bugs within our code, which increases the effectiveness of our current sandboxing, and they reduce our sandboxing needs, allowing introduction of new features that are both safer and lighter on resources.

But what about all that existing C++?

Of course, introducing a new programming language does nothing to address bugs in our existing C/C++ code. Even if we redirected the efforts of every software engineer on the Android team, rewriting tens of millions of lines of code is simply not feasible.
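The compile-time and runtime checks mentioned above can be sketched in a few lines of userspace Rust (illustrative only): out-of-range accesses are either rejected through Option or abort with a panic, rather than silently reading out of bounds.

```rust
// Userspace sketch of Rust's runtime memory-access checking.
fn checked_read(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied() // bounds-checked access: None instead of garbage
}

fn main() {
    let buf = [10u8, 20, 30];
    assert_eq!(checked_read(&buf, 1), Some(20));
    assert_eq!(checked_read(&buf, 99), None);
    // `buf[99]` would panic at runtime instead of reading out of bounds,
    // and a use-after-move would already be rejected at compile time.
}
```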
    The above analysis of the age of memory safety bugs in Android (measured from when they were first introduced) demonstrates why our memory-safe language efforts are best focused on new development rather than on rewriting mature C/C++ code. Most of our memory bugs occur in new or recently modified code, with about 50% being less than a year old. The comparative rarity of older memory bugs may come as a surprise to some, but we've found that old code is not where we most urgently need improvement. Software bugs are found and fixed over time, so we would expect the number of bugs in code that is being maintained but not actively developed to go down over time. Just as reducing the number and density of bugs improves the effectiveness of sandboxing, it also improves the effectiveness of bug detection.

    Limitations of detection: Bug detection via robust testing, sanitization, and fuzzing is crucial for improving the quality and correctness of all software, including software written in Rust. A key limitation of the most effective memory safety detection techniques is that the erroneous state must actually be triggered in instrumented code in order to be detected. Even in code bases with excellent test/fuzz coverage, this results in many bugs going undetected. Another limitation is that bug detection is scaling faster than bug fixing: in some projects, bugs that are detected are not always fixed. Bug fixing is a long and costly process involving many steps, and missing any one of them can result in the bug going unpatched for some or all users. For complex C/C++ code bases, often only a handful of people are capable of developing and reviewing the fix, and even with substantial effort spent on fixing bugs, the fixes are sometimes incorrect. Bug detection is most effective when bugs are relatively rare and dangerous bugs can be given the urgency and priority that they merit. 
    Our ability to reap the benefits of improvements in bug detection requires that we prioritize preventing the introduction of new bugs.

    Prioritizing prevention: Rust modernizes a range of other language aspects, which results in improved correctness of code:

    • Memory safety - enforces memory safety through a combination of compile-time and runtime checks.
    • Data concurrency - prevents data races. The ease with which this allows users to write efficient, thread-safe code has given rise to Rust's "Fearless Concurrency" slogan.
    • More expressive type system - helps prevent logical programming bugs (e.g. newtype wrappers, enum variants with contents).
    • References and variables are immutable by default - assisting the developer in following the security principle of least privilege by marking a reference or variable mutable only when they actually intend it to be so. While C++ has const, it tends to be used infrequently and inconsistently. In comparison, the Rust compiler assists in avoiding stray mutability annotations by warning about mutable values that are never mutated.
    • Better error handling in standard libraries - potentially failing calls are wrapped in Result, which causes the compiler to require that users check for failures even for functions that do not return a needed value. This protects against bugs like the Rage Against the Cage vulnerability, which resulted from an unhandled error. By making it easy to propagate errors via the ? operator and optimizing Result for low overhead, Rust encourages users to write their fallible functions in the same style and receive the same protection.
    • Initialization - requires that all variables be initialized before use. Uninitialized memory vulnerabilities have historically been the root cause of 3-5% of security vulnerabilities on Android. In Android 11, we started auto-initializing memory in C/C++ to reduce this problem. 
    However, initializing to zero is not always safe, particularly for things like return values, where it could become a new source of faulty error handling. Rust requires every variable to be initialized to a legal member of its type before use, avoiding the issue of unintentionally initializing to an unsafe value. Similar to Clang for C/C++, the Rust compiler is aware of the initialization requirement and avoids any potential performance overhead of double initialization.

    Safer integer handling: Overflow sanitization is on by default for Rust debug builds, encouraging programmers to specify a wrapping_add if they truly intend a calculation to overflow, or saturating_add if they don't. We intend to enable overflow sanitization for all builds in Android. Further, all integer type conversions are explicit casts: developers cannot accidentally cast during a function call, when assigning to a variable, or when attempting to do arithmetic with other types.

    Where we go from here: Adding a new language to the Android platform is a large undertaking. There are toolchains and dependencies that need to be maintained, test infrastructure and tooling that must be updated, and developers that need to be trained. For the past 18 months we have been adding Rust support to the Android Open Source Project, and we have a few early adopter projects that we will be sharing in the coming months. Scaling this to more of the OS is a multi-year project. Stay tuned; we will be posting more updates on this blog.

    Java is a registered trademark of Oracle and/or its affiliates. Thanks to Matthew Maurer, Bram Bonne, and Lars Bergstrom for contributions to this post. Special thanks to our colleagues, Adrian Taylor for his insight into the age of memory vulnerabilities, and Chris Palmer for his work on "The Rule of 2" and "The limits of Sandboxing".
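    The error-propagation and integer-handling behaviors described above can be sketched in standalone Rust (`parse_port` is a hypothetical function, used purely for illustration):

```rust
use std::num::ParseIntError;

// Hypothetical parser: `?` propagates the failure to the caller, so the
// error cannot be silently dropped the way an unchecked C return code can.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.trim().parse()?;
    Ok(port)
}

fn main() {
    assert_eq!(parse_port(" 8080 "), Ok(8080));
    assert!(parse_port("not-a-port").is_err());

    // Overflow intent is spelled out instead of happening silently:
    assert_eq!(u16::MAX.wrapping_add(1), 0);          // deliberate wraparound
    assert_eq!(u16::MAX.saturating_add(1), u16::MAX); // clamp at the maximum
    assert_eq!(u16::MAX.checked_add(1), None);        // overflow as a value

    // Integer conversions are always explicit casts, so narrowing is
    // visible in the code rather than implicit as in C/C++:
    let wide: u32 = 70_000;
    assert_eq!(wide as u16, 4_464); // 70_000 mod 65_536
}
```

    A caller that itself returns a `Result` can forward the error with `parse_port(input)?`, which is the style the post says Rust encourages for fallible functions.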

  • Announcing the Android Ready SE Alliance
    by Google on March 25, 2021 at 5:00 pm

    Posted by Sudhi Herle and Jason Wong, Android Team. When the Pixel 3 launched in 2018, it had a new tamper-resistant hardware enclave called Titan M. In addition to being a root-of-trust for Pixel software and firmware, it also enabled tamper-resistant key storage for Android apps using StrongBox. StrongBox is an implementation of the Keymaster HAL that resides in a hardware security module. It is an important security enhancement for Android devices and paved the way for us to consider features that were previously not possible. StrongBox and tamper-resistant hardware are becoming important requirements for emerging user features, including:

    • Digital keys (car, home, office)
    • Mobile Driver's License (mDL), National ID, ePassports
    • eMoney solutions (for example, Wallet)

    All these features need to run on tamper-resistant hardware to protect the integrity of the application executables and a user's data, keys, wallet, and more. Most modern phones now include discrete tamper-resistant hardware called a Secure Element (SE). We believe this SE offers the best path for introducing these new consumer use cases in Android. In order to accelerate adoption of these new Android use cases, we are announcing the formation of the Android Ready SE Alliance. SE vendors are joining hands with Google to create a set of open-source, validated, and ready-to-use SE applets. Today, we are launching the General Availability (GA) version of StrongBox for SE. This applet is qualified and ready for use by our OEM partners. It is currently available from Giesecke+Devrient, Kigen, NXP, STMicroelectronics, and Thales. It is important to note that these features are not just for phones and tablets. StrongBox is also applicable to Wear OS, Android Auto Embedded, and Android TV. 
    Using Android Ready SE in a device requires the OEM to:

    • Pick the appropriate, validated hardware part from their SE vendor
    • Enable the SE to be initialized from the bootloader and provision the root-of-trust (RoT) parameters through the SPI interface or cryptographic binding
    • Work with Google to provision attestation keys/certificates in the SE factory
    • Use the GA version of the StrongBox for SE applet, adapted to their SE
    • Integrate the HAL code
    • Enable an SE upgrade mechanism
    • Run CTS/VTS tests for StrongBox to verify that the integration is done correctly

    We are working with our ecosystem to prioritize and deliver the following applets in conjunction with corresponding Android feature releases: mobile driver's license and Identity Credentials, and digital car keys. We already have several Android OEMs adopting Android Ready SE for their devices. We look forward to working with our OEM partners to bring these next-generation features to our users. Please visit our Android Security and Privacy developer site for more info.

  • Announcing the winners of the 2020 GCP VRP Prize
    by Sarah O'Rourke on March 17, 2021 at 2:40 pm

    Posted by Harshvardhan Sharma, Information Security Engineer, Google. We first announced the GCP VRP Prize in 2019 to encourage security researchers to focus on the security of Google Cloud Platform (GCP), in turn helping us make GCP more secure for our users, customers, and the internet at large. In the first iteration of the prize, we awarded $100,000 to the winning write-up about a security vulnerability in GCP. We also announced that we would reward the top 6 submissions in 2020 and increased the total prize money to $313,337. 2020 turned out to be an amazing year for the Google Vulnerability Reward Program. We received many high-quality vulnerability reports from our talented and prolific vulnerability researchers, and this trend was reflected in the submissions we received for the GCP VRP Prize. After careful evaluation of the many innovative and high-impact vulnerability write-ups we received this year, we are excited to announce the winners of the 2020 GCP VRP Prize:

    • First Prize, $133,337: Ezequiel Pereira for the report and write-up RCE in Google Cloud Deployment Manager. The bug discovered by Ezequiel allowed him to make requests to internal Google services, authenticated as a privileged service account. There is also a video that gives more details about the bug and the discovery process.
    • Second Prize, $73,331: David Nechuta for the report and write-up 31k$ SSRF in Google Cloud Monitoring led to metadata exposure. David found a Server-Side Request Forgery (SSRF) bug in Google Cloud Monitoring's uptime check feature. The bug could have been used to leak the authentication token of the service account used for these checks.
    • Third Prize, $73,331: Dylan Ayrey and Allison Donovan for the report and write-up Fixing a Google Vulnerability. 
    They pointed out issues in the default permissions associated with some of the service accounts used by GCP services.
    • Fourth Prize, $31,337: Bastien Chatelard for the report and write-up Escaping GKE gVisor sandboxing using metadata. Bastien discovered a bug in the GKE gVisor sandbox's network policy implementation due to which the Google Compute Engine metadata API was accessible.
    • Fifth Prize, $1,001: Brad Geesaman for the report and write-up CVE-2020-15157 "ContainerDrip" Write-up. The bug could allow an attacker to trick containerd into leaking instance metadata by supplying a malicious container image manifest.
    • Sixth Prize, $1,000: Chris Moberly for the report and write-up Privilege Escalation in Google Cloud Platform's OS Login. The report demonstrates how an attacker can use DHCP poisoning to escalate their privileges on a Google Compute Engine VM.

    Congratulations to all the winners! If we have piqued your interest and you would like to enter the competition for a GCP VRP Prize in 2021, here's a reminder of the requirements:

    • Find a vulnerability in a GCP product (check out the Google Cloud Free Program to get started)
    • Report it to the VRP (you might get rewarded for it on top of the GCP VRP Prize!)
    • Create a public write-up
    • Submit it here

    Make sure to submit your VRP reports and write-ups before December 31, 2021 at 11:59 GMT. Good luck! You can learn more about the prize for this year here. We can't wait to see what our talented vulnerability researchers come up with this year!

  • Note to Self: Create Non-Exhaustive List of Competitors
    by BrianKrebs on April 20, 2021 at 9:46 pm

    What was the best news you heard so far this month? Mine was learning that KrebsOnSecurity is listed as a restricted competitor by Gartner Inc. [NYSE:IT] -- a $4 billion technology goliath whose analyst reports can move markets and shape the IT industry.

  • Did Someone at the Commerce Dept. Find a SolarWinds Backdoor in Aug. 2020?
    by BrianKrebs on April 16, 2021 at 12:57 pm

    On Aug. 13, 2020, someone uploaded a suspected malicious file to VirusTotal, a service that scans submitted files against more than five dozen antivirus and security products. Last month, Microsoft and FireEye identified that file as a newly-discovered fourth malware backdoor used in the sprawling SolarWinds supply chain hack. An analysis of the malicious file and other submissions by the same VirusTotal user suggest the account that initially flagged the backdoor as suspicious belongs to IT personnel at the National Telecommunications and Information Administration (NTIA), a division of the U.S. Commerce Department that handles telecommunications and Internet policy.

  • Microsoft Patch Tuesday, April 2021 Edition
    by BrianKrebs on April 13, 2021 at 11:12 pm

    Microsoft today released updates to plug at least 110 security holes in its Windows operating systems and other products. The patches include four security fixes for Microsoft Exchange Server -- the same systems that have been besieged by attacks on four separate (and zero-day) bugs in the email software over the past month. Redmond also patched a Windows flaw that is actively being exploited in the wild.

  • ParkMobile Breach Exposes License Plate Data, Mobile Numbers of 21M Users
    by BrianKrebs on April 12, 2021 at 10:18 pm

    Someone is selling account information for 21 million customers of ParkMobile, a mobile parking app that's popular in North America. The stolen data includes customer email addresses, phone numbers, license plate numbers, hashed passwords and mailing addresses.

  • Are You One of the 533M People Who Got Facebooked?
    by BrianKrebs on April 6, 2021 at 6:55 pm

    Ne'er-do-wells leaked personal data -- including phone numbers -- for some 533 million Facebook users this week. Facebook says the data was collected before 2020, when it changed things to prevent such information from being scraped from profiles. To my mind, this just reinforces the need to remove mobile phone numbers from all of your online accounts wherever feasible. Meanwhile, if you're a Facebook product user and want to learn whether your data was leaked, there are easy ways to find out.

  • Biden Administration Imposes Sanctions on Russia for SolarWinds
    by Bruce Schneier on April 20, 2021 at 11:19 am

    On April 15, the Biden administration both formally attributed the SolarWinds espionage campaign to the Russian Foreign Intelligence Service (SVR), and imposed a series of sanctions designed to punish the country for the attack and deter future attacks. I will leave it to those with experience in foreign relations to convince me that the response is sufficient to deter future operations. To me, it feels like too little. The New York Times reports that "the sanctions will be among what President Biden's aides say are 'seen and unseen steps in response to the hacking,'" which implies that there's more we don't know about. ...

  • Details on the Unlocking of the San Bernardino Terrorist’s iPhone
    by Bruce Schneier on April 19, 2021 at 11:08 am

    The Washington Post has published a long story on the unlocking of the San Bernardino terrorist's iPhone 5C in 2016. We all thought it was an Israeli company called Cellebrite. It was actually an Australian company called Azimuth Security. Azimuth specialized in finding significant vulnerabilities. Dowd, a former IBM X-Force researcher whom one peer called "the Mozart of exploit design," had found one in open-source code from Mozilla that Apple used to permit accessories to be plugged into an iPhone's lightning port, according to the person...

  • Friday Squid Blogging: Blobs of Squid Eggs Found Near Norway
    by Bruce Schneier on April 16, 2021 at 9:09 pm

    Divers find three-foot "blobs" — egg sacs of the squid Illex coindetii — off the coast of Norway. As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered. Read my blog posting guidelines here.

  • Cybersecurity Experts to Follow on Twitter
    by Bruce Schneier on April 16, 2021 at 7:13 pm

    Security Boulevard recently listed the “Top-21 Cybersecurity Experts You Must Follow on Twitter in 2021.” I came in at #7. I thought that was pretty good, especially since I never tweet. My Twitter feed just mirrors my blog. (If you are one of the 134K people who read me from Twitter, “hi.”)

  • NSA Discloses Vulnerabilities in Microsoft Exchange
    by Bruce Schneier on April 16, 2021 at 11:23 am

    Amongst the 100+ vulnerabilities patched in this month's Patch Tuesday are four in Microsoft Exchange that were disclosed by the NSA.

  • Malicious malware impacting reviews and ratings of application
    by Akshay Singla on April 9, 2021 at 11:16 am

    The COVID-19 pandemic has confined a big part of the population indoors, doing their work and daily chores online... The post Malicious malware impacting reviews and ratings of application appeared first on Quick Heal Blog | Latest computer security news, tips, and advice.

  • Cyber threats against Macs are increasing! Are you prepared?
    by Quickheal on April 5, 2021 at 6:11 am

    Let’s get to the point immediately: if you use an Apple Mac system, it doesn’t mean that you...

  • The risks of downloading apps from unauthorized app stores
    by Quickheal on March 25, 2021 at 10:47 am

    As an avid smartphone user, do you get frustrated at not finding the app you want on the...

  • Zloader: Entailing Different Office Files
    by Anjali Raut on March 23, 2021 at 1:55 pm

    Zloader aka Terdot – a variant of the infamous Zeus banking malware – is well known for aggressively using...

  • Data of 21 Million VPN users breached
    by Jatin Sharma on March 18, 2021 at 10:58 am

    A VPN is a prominent tool for an enhanced online life. A VPN consists of a tunnel that your encrypted data...

  • Poppy Gustafsson: the Darktrace tycoon in new cybersecurity era
    by Mark Sweney and Alex Hern on April 17, 2021 at 7:00 am

    Gustafsson’s firm, founded when she was 30, is marketed as a digital parallel of a human body fighting illness. Poppy Gustafsson runs a cutting-edge and gender-diverse cybersecurity firm on the brink of a £3bn stock market debut, but she is happy to reference the pop-culture classic The Terminator to help describe what Darktrace actually does. Launched in Cambridge eight years ago by an unlikely alliance of mathematicians, former spies from GCHQ and the US, and artificial intelligence (AI) experts, Darktrace provides protection that enables businesses to stay one step ahead of increasingly smart and dangerous hackers and viruses. Related: Huge rise in hacking attacks on home workers during lockdown

  • FBI hacks vulnerable US computers to fix malicious malware
    by Alex Hern UK technology editor on April 14, 2021 at 12:10 pm

    US justice department says the bureau hacked devices to remove malware from insecure software. The FBI has been hacking into the computers of US companies running insecure versions of Microsoft software in order to fix them, the US Department of Justice has announced. The operation, approved by a federal court, involved the FBI hacking into “hundreds” of vulnerable computers to remove malware placed there by an earlier malicious hacking campaign, which Microsoft blamed on a Chinese hacking group known as Hafnium. Related: Documents reveal FBI head defended encryption for WhatsApp before becoming fierce critic

  • Cybersecurity firm Darktrace plans £3bn IPO on London Stock Exchange
    by Mark Sweney, Kalyeena Makortoff and Alex Hern on April 12, 2021 at 9:38 am

    British firm’s CEO Poppy Gustafsson says London was ‘natural choice’ despite Deliveroo’s disastrous debut. The British cybersecurity firm Darktrace has announced plans for a £3bn listing on the London Stock Exchange, providing a shot in the arm for the City after Deliveroo’s disastrous debut damaged the capital’s reputation for big tech “unicorn” flotations. Poppy Gustafsson, the company’s 38-year-old chief executive, who holds a stake that will be worth a reported £20m when it floats in about a month, said that London was the “natural choice” despite Deliveroo shares plunging after the food delivery company’s debut last month.

  • Facebook data leak: Australians urged to check and secure social media accounts
    by Mostafa Rachwani on April 5, 2021 at 8:18 am

    Experts urge users to secure accounts and passwords after breach exposes personal details of more than 500 million people. Australians are being urged to secure their social media accounts after the details of more than 500 million global Facebook users were found online in a massive data breach. The details published freely online included names, phone numbers, email addresses, account IDs and bios. Related: Australia’s move to tame Facebook and Google is just the start of a global battle | Michelle Meagher

  • Netflix weighs up crackdown on password sharing
    by Mark Sweney on March 12, 2021 at 10:04 am

    Streaming service tests feature that asks viewers if they share a household with a subscriber. Netflix has begun testing a feature that asks viewers whether they share a household with a subscriber, in a move that could lead to a crackdown on the widespread practice of sharing passwords among friends and family. Some Netflix users are reported to have received a message asking them to confirm they live with the account owner by entering a code included in a text message or email sent to the subscriber.

  • Join the Team! Announcing the Launch of the NIST Privacy Workforce Public Working Group
    by Dylan Gilbert on April 14, 2021 at 12:00 pm

    When it comes to managing privacy risks, workforce is a key consideration. According to a recent IAPP/FairWarning report, on average, even mature privacy programs have only three employees dedicated to privacy. This is why we included workforce as a priority area in the NIST Privacy Framework Roadmap. The benefits of using the Privacy Framework are enhanced when organizations have a sufficient pool of knowledgeable and skilled privacy professionals to draw from. In response to stakeholder challenges with privacy workforce recruitment and development, we are planning to create a privacy

  • Differential Privacy for Complex Data: Answering Queries Across Multiple Data Tables
    by Xi He on March 25, 2021 at 12:00 pm

    We are excited to introduce our second guest author in this blog series, Xi He, assistant professor of Computer Science at the University of Waterloo, whose research represents the state of the art in the subject of this blog post: answering queries with joins while preserving differential privacy. - Joseph Near and David Darais So far in this blog series, we have discussed the challenges of ensuring differential privacy for queries over a single database table. In practice, however, databases are often organized into multiple tables, and queries over the data involve joins between these

  • Stakeholders: The “Be-All and End-All” of NIST’s Cybersecurity and Privacy Work
    by Kevin Stine on March 24, 2021 at 12:00 pm

    When it comes down to it, NIST’s cybersecurity and privacy work is all about its stakeholders. Our researchers and other staff can do the most extraordinary work to advance the state of the art or solve problems in these areas – but our success truly should only be measured by the difference we make in providing the best possible and most useful tools and information. That’s why we put such a high premium on engaging with the public and private sectors, academia, and other stakeholders. NIST counts on developers, providers, and everyday users of cybersecurity and privacy technologies and

  • NIST Risk Management Framework Team Did Some Spring Cleaning!
    by Victoria Yan Pillitteri on March 15, 2021 at 12:00 pm

    Check out our new and improved Risk Management Framework (RMF) website that better highlights the resources NIST developed to support implementers. In addition to the new look, we have: updated the layout of the site to focus on the RMF steps, identified specific resources and tools available for each RMF step, included supporting NIST publications for each RMF step, updated the RMF logo, and featured resources specific to the NIST Security and Privacy Controls in Special Publication (SP) 800-53, such as: a new, web-based version of the SP 800-53, Revision 5 controls and SP 800-53B control

  • There’s Still Time to Comment on IoT Cybersecurity Guidance – Send Us Your Feedback Today!
    by Michael Fagan on February 24, 2021 at 12:00 pm

    Throughout this snowy winter, NIST has been listening to the valuable feedback received on our recent flurry of IoT cybersecurity guidance drafts, including draft NISTIRs 8259B, 8259C, 8259D, and draft Special Publication 800-213. We have extended the comment deadline for all four draft publications to February 26th, and we hope reviewers will use the extra time to let us know what they think about this exciting new work. To those who have already submitted comments and reviews on the draft publications, thank you! We also want to thank everyone who participated virtually in our January 26th

  • Over 750,000 Users Downloaded New Billing Fraud Apps From Google Play Store
    by noreply@blogger.com (Ravie Lakshmanan) on April 20, 2021 at 4:19 pm

    Researchers have uncovered a new set of fraudulent Android apps in the Google Play store that were found to hijack SMS message notifications to carry out billing fraud. The apps in question primarily targeted users in Southwest Asia and the Arabian Peninsula, attracting a total of 700,000 downloads before they were discovered and removed from the platform. The findings were reported

  • [eBook] Why Autonomous XDR Is Going to Replace NGAV/EDR
    by noreply@blogger.com (The Hacker News) on April 20, 2021 at 11:06 am

    For most organizations today, endpoint protection is the primary security concern. This is not unreasonable – endpoints tend to be the weakest points in an environment – but it also misses the forest for the trees. As threat surfaces expand, security professionals are harder pressed to detect threats that target other parts of an environment and can easily miss a real vulnerability by focusing

  • 120 Compromised Ad Servers Target Millions of Internet Users
    by noreply@blogger.com (Ravie Lakshmanan) on April 20, 2021 at 10:41 am

    An ongoing malvertising campaign tracked as "Tag Barnakle" has been behind the breach of more than 120 ad servers over the past year, sneakily injecting code in an attempt to serve malicious advertisements that redirect users to rogue websites, thus exposing victims to scamware or malware. Unlike other operators who set about their task by infiltrating the ad-tech ecosystem using "convincing

  • Lazarus APT Hackers are now using BMP images to hide RAT malware
    by noreply@blogger.com (Ravie Lakshmanan) on April 20, 2021 at 5:33 am

    A spear-phishing attack operated by a North Korean threat actor targeting its southern counterpart has been found to conceal its malicious code within a bitmap (.BMP) image file to drop a remote access trojan (RAT) capable of stealing sensitive information. Attributing the attack to the Lazarus Group based on similarities to prior tactics adopted by the adversary, researchers from Malwarebytes

  • Malware That Spreads Via Xcode Projects Now Targeting Apple's M1-based Macs
    by noreply@blogger.com (Ravie Lakshmanan) on April 19, 2021 at 11:58 am

    A Mac malware campaign targeting Xcode developers has been retooled to add support for Apple's new M1 chips and expand its features to steal confidential information from cryptocurrency apps. XCSSET came into the spotlight in August 2020 after it was found to spread via modified Xcode IDE projects, which, upon building, were configured to execute the payload. The malware repackages payload

  • Updating Plugins
    on April 21, 2021 at 5:00 am

    Every plugin or add-on you install in your browser can expose you to more danger. Only install the plugins you need and make sure they are always current. If you no longer need a plugin, disable or remove it from your browser via your browser's plugin preferences.

  • Detecting Fraud
    on April 20, 2021 at 5:00 am

    Review your bank, credit card and financial statements regularly to identify unauthorized activity. This is one of the most effective ways to quickly detect if your bank account, credit card or identity has been compromised.

  • Don't Lose That Device
    on April 19, 2021 at 5:00 am

    Did you know you are 100 times more likely to lose a laptop or mobile device than to have it stolen? When you are traveling, always double-check to make sure you have your devices with you, such as when leaving airport security, exiting your taxi or checking out of your hotel.

  • Digital Inheritance
    on April 16, 2021 at 5:00 am

    What happens to our digital presence when we die or become incapacitated? Many of us have or know we should have a will and checklists of what loved ones need to know in the event of our passing. But what about all of our digital data and online accounts? Consider creating some type of digital will, often called a "Digital Inheritance" plan.

  • Secure Your Home Wi-Fi Network
    on April 15, 2021 at 5:00 am

    Be aware of all the devices connected to your home network, including baby monitors, gaming consoles, TVs, appliances or even your car. Ensure all those devices are protected by a strong password and/or are running the latest version of their operating system.

  • Is it Real or not? How to Spot phishing Emails
    by Nir Roditi on April 12, 2021 at 3:41 pm

    It has become virtually impossible nowadays to distinguish between a real and a fake email from a well-known company, especially one you’re likely a customer or member of, as the design, logo, and name seem so real. But knowing which emails are real and which are phishing emails is crucial and can save you money and problems... The post Is it Real or not? How to Spot phishing Emails appeared first on ZoneAlarm Security Blog.

  • 2020’s Top 10 Phishing Brands
    by Nir Roditi on March 4, 2021 at 10:19 am

    With 2020 behind us, it is now possible to look back and analyze the different cybercrime trends that took place in order to be better prepared in 2021. One of the most popular forms of cyberattack is phishing, and as it usually comes in the form of emails from well-known brands, they can...

  • How Small Businesses Can Avoid Cyberattacks in 2021
    by Danielle Siso on January 19, 2021 at 10:43 am

    Across 2020 – and, most likely, throughout 2021 – the priority of small business owners has been weathering the storm brought on by the coronavirus pandemic. That’s understandable, given the challenges and unique threats from Covid-19. However, the danger posed by cybercriminals has not gone away; in fact, the evidence points to the contrary. The...

  • Best Practices for Working from Home
    by Danielle Siso on January 12, 2021 at 4:12 pm

    Working from home has become a new reality for many workers across the globe in many industries. The reality is that if your job can be done via a computer, or simply doesn’t require you to be physically present at your office in order for it to be completed, then working from home is the...

  • Cybersecurity 2020 in Review
    by Danielle Siso on December 31, 2020 at 12:40 pm

    2020 was a year we will never forget. The year where the words “COVID-19” and “corona” were said by the entire world in every other sentence. Where ordering takeout food and wearing a mask became the norm. And it wasn’t just the pandemic that caused the world to go into panic mode and uncertainty. The world...

 

Copyright secured by Digiprove © 2020 Çağlar Özdemir